[Support] selfhosters.net's Template Repository



On 6/8/2022 at 2:51 PM, DMills said:

 

nvm, I fixed it myself: I edited the sqlite3 database and added the Dirty column, and it's working fine now.

 

Could you share what type you set the column to, and how you made it handle null values? I have the same issue, but a simple insert of that column didn't work. I would much appreciate it if you could share the SQLite code you used to add the column. :)

Edited by crowdedlight
Link to comment
15 hours ago, crowdedlight said:

Could you share what type you set the column to, and how you made it handle null values? I have the same issue, but a simple insert of that column didn't work. I would much appreciate it if you could share the SQLite code you used to add the column. :)

 

I ended up adding an integer column called Dirty and made it nullable, since there were already rows present. I checked on the Focalboard server and they're aware of the issue, so it should be fixed soon. Not running the container anymore so I can't say for sure, sorry!
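Something along these lines should do it; the database path and table name below are placeholders (use whichever table your migration error names), and a column added with ALTER TABLE is nullable by default, so existing rows are simply left as NULL:

# sketch only: the path and table name are placeholders, not the template's actual layout
sqlite3 /mnt/user/appdata/focalboard/focalboard.db \
  "ALTER TABLE blocks ADD COLUMN Dirty INTEGER;"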

  • Like 1
Link to comment

Hi all,

I'm having an issue with Graylog.

It was working fine, then I upgraded Unraid from 6.8 to 6.10.

Since then Graylog stops after a short time.

It stops processing messages and the journal fills up.

If I restart the container, it will process the journal for a time, then stop again.

When it stops, I get this error:

2022-06-17 11:21:57,189 ERROR: com.google.common.util.concurrent.ServiceManager - Service LocalKafkaMessageQueueReader [FAILED] has failed in the RUNNING state.
java.lang.IllegalStateException: Invalid message size: 0
        at org.graylog.shaded.kafka09.log.FileMessageSet.searchFor(FileMessageSet.scala:141) ~[graylog.jar:?]
        at org.graylog.shaded.kafka09.log.LogSegment.translateOffset(LogSegment.scala:105) ~[graylog.jar:?]
        at org.graylog.shaded.kafka09.log.LogSegment.read(LogSegment.scala:148) ~[graylog.jar:?]
        at org.graylog.shaded.kafka09.log.Log.read(Log.scala:506) ~[graylog.jar:?]
        at org.graylog2.shared.journal.LocalKafkaJournal.read(LocalKafkaJournal.java:677) ~[graylog.jar:?]
        at org.graylog2.shared.journal.LocalKafkaJournal.readNext(LocalKafkaJournal.java:617) ~[graylog.jar:?]
        at org.graylog2.shared.journal.LocalKafkaJournal.read(LocalKafkaJournal.java:599) ~[graylog.jar:?]
        at org.graylog2.shared.messageq.localkafka.LocalKafkaMessageQueueReader.run(LocalKafkaMessageQueueReader.java:110) ~[graylog.jar:?]
        at com.google.common.util.concurrent.AbstractExecutionThreadService$1$2.run(AbstractExecutionThreadService.java:67) [graylog.jar:?]
        at com.google.common.util.concurrent.Callables$4.run(Callables.java:121) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332]

 

I cannot work out why.

Elasticsearch is fine, according to its logs.

I have no pipelines, and I tried removing all extractors; same error.

Link to comment
1 minute ago, Tom Sealey said:

Hi all, I'm having an issue with Graylog. It was working fine, then I upgraded Unraid from 6.8 to 6.10. Since then Graylog stops after a short time: it stops processing messages, the journal fills up, and when it stops I get the "Invalid message size: 0" error and stack trace posted above. I cannot work out why. Elasticsearch is fine, according to its logs, and I have no pipelines and tried removing all extractors.

Also,

I have tried several different versions of the Graylog Docker container; I'm currently sat on 4.2.8.
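In case it helps: "Invalid message size: 0" in LocalKafkaMessageQueueReader usually points at a corrupted on-disk journal segment (for example from an unclean shutdown around the upgrade). A blunt but common remediation is to stop the container and clear the journal, at the cost of whatever messages were still buffered. A minimal sketch, assuming the journal is mapped into appdata on the host; adjust the container name and path to your setup:

# assumption: the journal lives under appdata on the host; clearing it discards buffered messages
docker stop Graylog
mv /mnt/user/appdata/graylog/journal /mnt/user/appdata/graylog/journal.bak
docker start Graylog   # Graylog recreates an empty journal on startup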

Link to comment
On 6/12/2022 at 1:45 PM, LumberJackGeek said:

bumping this?

Alright, after dumping several hours into "unpackerr" and speaking with the dev on Discord, I do not recommend unpackerr as a solution. The dev is ignorant and refuses to make any improvements, and when I offered to fix the bugs myself, he said he would reject them. I highly recommend that anyone who is considering this project look elsewhere.

Edited by LumberJackGeek
Link to comment
16 hours ago, LumberJackGeek said:

Alright, after dumping several hours into "unpackerr" and speaking with the dev on Discord, I do not recommend unpackerr as a solution. The dev is ignorant and refuses to make any improvements, and when I offered to fix the bugs myself, he said he would reject them. I highly recommend that anyone who is considering this project look elsewhere.

 

Well, it isn't a bug that you ran into. It is an issue with your system being slow. I wouldn't have accepted the change you were suggesting either, as that is not what unpackerr was designed to do. It is designed to watch the queue of the starr apps and unpack what it says needs unpacking. Cut and dried. The fact that you have thousands of trash items in your queue is not the fault of unpackerr; maybe you should clean up your mess, stop trying to download so much trash at once, and stop blaming other people's software for your self-caused issues.

 

Your opinion is fine and well, but just because the dev of a project won't accept a PR based on a user's opinion doesn't make that person ignorant. Well, I guess it does; in this case it is you whining like a little baby because you didn't get a change you wanted on his server, then leaving, and now doing it here, lol. Make the change and run it from source if you are that passionate about it, as most people typically do when their changes don't make it into a live build that tens or hundreds of thousands of people use.

 

His app is better off for you not using it and taking your BS elsewhere. I honestly wish we had a way to block clowns like you from using any of the Starr apps just for cause.

 

And yes, y'all look elsewhere by all means. Then go ask all those people what they use when it comes to unpacking things the right way, and you'll end up right back at unpackerr, or you'll make your own. Actually yes, please do. Make your own, put it on GH, and let's see what you end up with, boss man.

Edited by nitsua
typo
Link to comment
21 hours ago, LumberJackGeek said:

Alright, after dumping several hours into "unpackerr" and speaking with the dev on Discord, I do not recommend unpackerr as a solution. The dev is ignorant and refuses to make any improvements, and when I offered to fix the bugs myself, he said he would reject them. I highly recommend that anyone who is considering this project look elsewhere.

That's simply not how our conversation went. Anyone who wants to read it is welcome to join my Discord and check it out: https://golift.io/discord

 

While you're there, check out the hundreds of people I've helped; just scroll up. Look how much time I gave LumberJack trying to help him with his 2700 items stuck in the Sonarr queue. I even solicited help from nitsua and bakerboy when I was stumped. I've closed 137 issues, most of them opened by other people: https://github.com/davidnewhall/unpackerr/issues.

 

I'm an asshole, but calling me ignorant because you don't know how to use the software that I've put 3 free years into is a bit much. Telling people I refuse to make the software better is an outright lie. These words you've chosen make you an entitled prick. You said you can fix it, so fork it and fix it, and quit telling people to look for solutions besides unpackerr; none exist. 

 

Better check your attitude in the community of free software. Good luck.

 

Thanks,

-The Unpackerr Dev

  • Like 1
Link to comment

Did anyone manage to set up mail invoices in invoiceninja-v5? From what I can see it is set up properly, but it is not working. Some digging on YouTube turned up information about connecting with Google, but those options are missing from this app. Any way to make it work?
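If it's the outgoing SMTP side that isn't configured, InvoiceNinja v5 is a Laravel app and reads the usual Laravel mail settings from its environment. A sketch of the kind of extra container variables involved; the names are the standard Laravel ones, the values are placeholders, and for Gmail you would point MAIL_HOST at smtp.gmail.com with an app password rather than the in-app Google option:

# placeholders only; add as extra environment variables on the container and adjust to your provider
MAIL_MAILER=smtp
MAIL_HOST=smtp.example.com
MAIL_PORT=587
MAIL_USERNAME=invoices@example.com
MAIL_PASSWORD=your-smtp-or-app-password
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=invoices@example.com
MAIL_FROM_NAME="My Company"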

Link to comment
  • 2 weeks later...

Hi,
I'm trying to get traccar running as a Docker container.

 

I downloaded the traccar.xml and put it into the folder that I selected as the host path.

 

I do not get any error messages when applying:

 

Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='traccar2' --net='br0' --ip='192.168.178.13' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'TCP_PORT_8082'='8082' -e 'TCP_PORT_5000-5150'='5000-5150' -e 'UDP_PORT_5000-5150'='5000-5150' -v '/mnt/user/appdata/traccar/logs':'/opt/traccar/logs':'rw' -v '/mnt/user/appdata/traccar/':'/opt/traccar/conf/traccar.xml':'rw' --restart always --hostname traccar 'traccar/traccar'
903a019ed9bef0cb8a80ababf42e9fd94a1eb56c51949d9bc3f9ba62ed7f6b91

The command finished successfully!

 

When I want to start the docker, though, I receive an "Execution Error- bad parameter".

 

My favourite would be running it on "br0", but I do get the same error in every other scenario.


There is no conflicting IP, btw.

 

 

 

What I did just see in the log is this:

 

Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered blocking state
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered disabled state
Jul 3 16:29:11 Server kernel: device veth78599bd entered promiscuous mode
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered blocking state
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered forwarding state
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered disabled state
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered disabled state
Jul 3 16:29:11 Server kernel: device veth78599bd left promiscuous mode
Jul 3 16:29:11 Server kernel: docker0: port 5(veth78599bd) entered disabled state

 

Any hint?

Thank you.

Edited by Slarti123
Link to comment
5 minutes ago, JonathanM said:

post the docker run command that triggers it.

The problem here is that the container already exists with that name: https://forums.unraid.net/topic/125618-docker-port-mapping-broken/#comment-1145250

 

Without trying to actually break the system via some weirdness, my suggestion is to delete the container (and any orphans - Advanced View) and then reinstall via Previous Apps.
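If you prefer doing that cleanup from a terminal, it's roughly this (the container name is taken from the docker create output above):

docker rm traccar2   # free up the name so the template can recreate the container
docker ps -a         # check for any other orphaned traccar containers and remove those too
# then reinstall from Apps > Previous Apps as suggested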

Link to comment

Thanks, everyone:

Solved.

For the record (and others):

 

1) Traccar:


The template creates a folder called "traccar.xml". You have to delete that folder and upload the linked XML file instead (and, of course, do not autostart the Docker container before doing so).

 

For some reason, that first attempt created a container that was never able to start again, even though I manually copied the file as described.

 

I had to remove the container, delete the image, manually delete all data in "appdata", and then set everything up from scratch as described above.
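In command form, that cleanup looks roughly like this for anyone hitting the same thing; the paths come from the docker create command earlier in the thread, and the file-to-file mapping on the last line is my assumption about what the template intends:

# remove the broken container, its image, and the stale appdata, then start clean
docker rm traccar2
docker rmi traccar/traccar
rm -r /mnt/user/appdata/traccar
mkdir -p /mnt/user/appdata/traccar
# place the real traccar.xml in that folder, then map the file onto the file, e.g.:
#   -v '/mnt/user/appdata/traccar/traccar.xml':'/opt/traccar/conf/traccar.xml'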

2) TVH:

To give you the full picture: I'm still not sure about the initial cause. Still, a similar solution: completely delete the container, the image, and "appdata", then do a new, clean setup. It works now...

Thanks a lot!

Link to comment
  • 2 weeks later...

I am having problems with the `speedtest-tracker` docker.

I get a 500 error when trying to access the UI. The logs are attached below. Any idea about what's going on?

 

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...
usermod: no changes

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
-------------------------------------

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 911
User gid: 911
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 40-config: executing...
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please manually download/update the GeoIP2 db and save as /config/geoip2db/GeoLite2-City.mmdb
[cont-init.d] 40-config: exited 0.
[cont-init.d] 50-speedtest: executing...
Copying latest site files to config
Database file exists
Env file exists
Updating packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/1) Installing composer (2.0.13-r0)
Executing busybox-1.32.1-r6.trigger
OK: 161 MiB in 175 packages
Installing dependencies from lock file (including require-dev)
Verifying lock file contents can be installed on current platform.
Nothing to install, update or remove
Package fzaninotto/faker is abandoned, you should avoid using it. No replacement was suggested.
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: [32mbarryvdh/laravel-ide-helper[39m
Discovered Package: [32mfacade/ignition[39m
Discovered Package: [32mfideloper/proxy[39m
Discovered Package: [32mfruitcake/laravel-cors[39m
Discovered Package: [32mhenrywhitaker3/laravel-actions[39m
Discovered Package: [32mlaravel-notification-channels/telegram[39m
Discovered Package: [32mlaravel/slack-notification-channel[39m
Discovered Package: [32mlaravel/tinker[39m
Discovered Package: [32mlaravel/ui[39m
Discovered Package: [32mnesbot/carbon[39m
Discovered Package: [32mnunomaduro/collision[39m
Discovered Package: [32mtymon/jwt-auth[39m
[32mPackage manifest generated successfully.[39m
90 packages you are using are looking for funding.
Use the `composer fund` command to find out more!
Running database migrations
**************************************
* Application In Production! *
**************************************

Do you really wish to run this command? (yes/no) [no]:
> Command Canceled!
Generating app key
Application key set successfully.
JWT secret exists
Slack webhook set, updating db


Not enough arguments (missing: "webhook").


Telegram chat id and bot token unset
Base path is unset
AUTH variable not set. Disabling authentication
Disabling authentication

Illuminate\Database\QueryException

SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `settings` where `name` = auth)

at vendor/laravel/framework/src/Illuminate/Database/Connection.php:678
674▕ // If an exception occurs when attempting to run a query, we'll format the error

675▕ // message to include the bindings with SQL, which will make this exception a
676▕ // lot more helpful to the developer instead of just the database's errors.
677▕ catch (Exception $e) {
➜ 678▕ throw new QueryException(
679▕ $query, $this->prepareBindings($bindings), $e
680▕ );
681▕ }
682▕

[2m+25 vendor frames [22m
26 app/Helpers/SettingsHelper.php:22
Illuminate\Database\Eloquent\Builder::get()

27 app/Helpers/SettingsHelper.php:43
App\Helpers\SettingsHelper::get()
Clearing old jobs from queue

Illuminate\Database\QueryException

SQLSTATE[HY000] [2002] Connection refused (SQL: delete from `jobs`)

at vendor/laravel/framework/src/Illuminate/Database/Connection.php:678
674▕ // If an exception occurs when attempting to run a query, we'll format the error

675▕ // message to include the bindings with SQL, which will make this exception a
676▕ // lot more helpful to the developer instead of just the database's errors.
677▕ catch (Exception $e) {
➜ 678▕ throw new QueryException(
679▕ $query, $this->prepareBindings($bindings), $e
680▕ );
681▕ }
682▕

[2m+19 vendor frames [22m
20 app/Console/Commands/ClearQueueCommand.php:41
Illuminate\Database\Query\Builder::delete()

[2m+13 vendor frames [22m
34 artisan:37
Illuminate\Foundation\Console\Kernel::handle()
[cont-init.d] 50-speedtest: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
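For what it's worth, the two "SQLSTATE[HY000] [2002] Connection refused" exceptions in that log mean the app can't reach the database it's configured for, which would also explain the 500 in the UI. The variables to double-check on the container are the usual Laravel database ones; the values below are placeholders, and if you meant to use the built-in SQLite file instead, DB_CONNECTION would be sqlite:

# placeholders -- point these at a database the container can actually reach
DB_CONNECTION=mysql
DB_HOST=192.168.1.10   # the database host/container IP, not 127.0.0.1
DB_PORT=3306
DB_DATABASE=speedtest
DB_USERNAME=speedtest
DB_PASSWORD=changeme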

 

Link to comment

Thanks for the great Backblaze container!

 

I use Nextcloud on my Unraid server to provide syncing on my family's computers.  I wanted to back up the whole Nextcloud folder, but I could not get the D:\ drive to show up in the Backblaze instance even after following the symbolic linking step in the getting started instructions.

 

I found that the Backblaze docker doesn't use the same user ID or group ID as the Nextcloud docker, which is why I couldn't access the folder inside the container.

 

I manually added container variables USER_ID and GROUP_ID to the Backblaze container settings.

(screenshot: the added USER_ID and GROUP_ID container variables)

 

I can access the whole folder structure now!

 

Maybe consider adding these fields to the Backblaze container template so that it has access to the whole /usr/share structure?
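For anyone replicating this, the two extra variables look like the following; the 99/100 values are the typical Unraid nobody/users IDs and are only an assumption here, so match whatever UID/GID actually owns your Nextcloud data:

# added as extra container variables on the Backblaze template (values are assumptions)
USER_ID=99     # UID that owns the Nextcloud data on the host
GROUP_ID=100   # GID that owns the Nextcloud data on the host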

Link to comment

Speedtest Tracker just spontaneously stopped working for me on the 20th of July.

 

I haven't rebooted my machine. Haven't installed anything. Haven't changed anything.

The graph literally just shows that after 3am on July 20th every single speedtest has failed with "invalid date".

 

I wiped the SQL database from the settings page but this has not fixed the problem either. I can't even run a test because nothing happens.

 

Has anyone else experienced this?

Link to comment
23 minutes ago, plantsandbinary said:

Speedtest Tracker just spontaneously stopped working for me on the 20th of July. I haven't rebooted my machine, installed anything, or changed anything; every speedtest since 3am on July 20th fails with "invalid date", and wiping the SQL database from the settings page hasn't fixed it. Has anyone else experienced this?

 

Fixed it by killing the container and reinstalling it entirely. Seems the GDPR acceptance expires after 1 year.

  • Thanks 1
Link to comment

Hi all,

 

I need some help with Focalboard. 

I installed it, but it seems to be failing because it can't open its database:

error [2022-07-25 09:58:24.313 +01:00] Database Ping failed                     caller="server/server.go:217" error="unable to open database file: no such file or directory"
fatal [2022-07-25 09:58:24.313 +01:00] server.NewStore ERROR                    caller="main/main.go:145" error="unable to open database file: no such file or directory"

Does anyone know how to get this installed? 

It's a clean install.
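"unable to open database file: no such file or directory" from SQLite generally means the directory the database file should live in doesn't exist or isn't writable from inside the container. A sketch of the first things to check; the host path is an assumption based on the usual appdata layout, so use whatever your template actually maps:

# assumption: the template maps Focalboard's data directory to this appdata path
ls -ld /mnt/user/appdata/focalboard                  # does the mapped folder exist at all?
mkdir -p /mnt/user/appdata/focalboard                # create it if it doesn't
chmod -R u+rwX,g+rwX /mnt/user/appdata/focalboard    # make sure the container user can write there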

Link to comment
19 minutes ago, plantsandbinary said:

How do I add multiple subdomains for the Cloudflare-DDNS container?

Pretty sure that one only supports a single subdomain; there's "cloudflareddns" from hotio's repository, which suggests it supports multiple.
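From memory (so verify against hotio's documentation before relying on it), that image takes semicolon-separated lists, one entry per record, roughly like this:

# from memory -- check hotio's docs; lists are semicolon-separated, one entry per record
CF_APITOKEN=your-cloudflare-api-token
CF_HOSTS=home.example.com;nas.example.com
CF_ZONES=example.com;example.com
CF_RECORDTYPES=A;A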

Edited by Kilrah
Link to comment
2022-08-01 09:39:01,911 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: AWS plugins 3.1.4 [org.graylog.aws.AWSPlugin]
2022-08-01 09:39:01,912 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: Collector 3.1.4 [org.graylog.plugins.collector.CollectorPlugin]
2022-08-01 09:39:01,912 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: Threat Intelligence Plugin 3.1.4 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2022-08-01 09:39:02,131 INFO : org.graylog2.bootstrap.CmdLineTool - Running with JVM arguments: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=/usr/share/graylog/data/config/log4j2.xml -Djava.library.path=/usr/share/graylog/lib/sigar/ -Dgraylog2.installation_source=docker
2022-08-01 09:39:02,283 INFO : org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final
2022-08-01 09:39:03,404 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:03,414 INFO : org.graylog2.plugin.system.NodeId - No node ID file found. Generated: 7422203b-8cec-41af-aa11-9d23d0495217
2022-08-01 09:39:03,495 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,499 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:03,500 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,550 INFO : org.mongodb.driver.cluster - Cluster created with settings {hosts=[192.168.1.17:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2022-08-01 09:39:03,574 INFO : org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out
2022-08-01 09:39:03,587 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:7}] to 192.168.1.17:27017
2022-08-01 09:39:03,590 INFO : org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=192.168.1.17:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[5, 0, 9]}, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1631230}
2022-08-01 09:39:03,598 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:8}] to 192.168.1.17:27017
2022-08-01 09:39:03,731 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:03,731 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,784 INFO : io.searchbox.client.AbstractJestClient - Setting server pool to a list of 1 servers: [http://192.168.1.17:9200]
2022-08-01 09:39:03,784 INFO : io.searchbox.client.JestClientFactory - Using multi thread/connection supporting pooling connection manager
2022-08-01 09:39:03,817 INFO : io.searchbox.client.JestClientFactory - Using custom ObjectMapper instance
2022-08-01 09:39:03,818 INFO : io.searchbox.client.JestClientFactory - Node Discovery disabled...
2022-08-01 09:39:03,818 INFO : io.searchbox.client.JestClientFactory - Idle connection reaping disabled...
2022-08-01 09:39:03,821 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,845 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,845 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,846 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,892 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,894 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,896 INFO : org.graylog2.shared.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2022-08-01 09:39:03,897 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,897 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,937 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,940 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,954 INFO : org.graylog2.shared.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2022-08-01 09:39:03,954 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,955 INFO : org.graylog2.shared.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2022-08-01 09:39:03,956 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,960 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:03,962 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:03,962 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,054 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,071 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,080 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,081 INFO : org.graylog2.buffers.OutputBuffer - Initialized OutputBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2022-08-01 09:39:04,081 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,083 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,084 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,084 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,085 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,086 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,093 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,094 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,095 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:04,095 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,096 INFO : org.graylog2.shared.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2022-08-01 09:39:04,096 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,097 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,098 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
2022-08-01 09:39:04,099 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,100 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,101 ERROR: org.graylog2.shared.journal.KafkaJournal - Cannot access offset file: Permission denied
2022-08-01 09:39:04,103 ERROR: org.graylog2.bootstrap.CmdLineTool - 

################################################################################

ERROR: Unable to access file /usr/share/graylog/data/journal/graylog2-committed-read-offset: Permission denied

Need help?

* Official documentation: http://docs.graylog.org/
* Community support: https://www.graylog.org/community-support/
* Commercial support: https://www.graylog.com/support/

Terminating. :(

################################################################################


** Press ANY KEY to close this window ** 
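No question was attached to that paste, but the log spells out the cause: the journal at /usr/share/graylog/data/journal isn't writable by the user the Graylog image runs as. If that path is mapped to appdata, fixing ownership on the host side is the usual remedy; a sketch, where the host path is an assumption and 1100 is, as far as I know, the UID the official Graylog image uses:

# assumptions: journal mapped to this appdata path, container user is UID/GID 1100
docker stop Graylog
chown -R 1100:1100 /mnt/user/appdata/graylog/journal
docker start Graylog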

 

  • Confused 1
Link to comment
