vakilando

Moderators
  • Posts: 293
  • Joined
  • Last visited

Converted

  • Gender
    Male
  • Location
    Germany


vakilando's Achievements

Contributor (5/14)

Reputation: 58
Community Answers: 1

  1. Just throwing this out there: would a USB 3 port > USB 2 cable > USB 3 stick also work?
  2. The value can be changed with the "Tips and Tweaks" plugin. On my system, "Maximum watches 'fs.inotify.max_user_watches':" shows 524288. This should be the default value, because I don't recall ever having changed it. (A sysctl sketch for checking the value from a shell follows after this list.)
  3. Exactly. Search for Vaultwarden under Apps and install the container. Give the container its own IP address, i.e. don't select host or bridge as the network but br0, br1, etc., depending on what you have (a docker run sketch follows after this list). Then block that IP from WWW access on your router or firewall. For the login to work, however, you need an HTTPS connection. I do this with a reverse proxy (previously swag, now npm), although my Vaultwarden is also reachable from the WWW. I guess this also works without a reverse proxy, but I've never tried it.... 2FA should work just the same with the built-in services. Emergency contacts work via a reachable mail server; however, these emergency contacts also need appropriate access to your environment (local or VPN).
  4. @xenoblade strange, nearly the same permissions, and I had no problems with postgres. Also, I'm pretty sure they are OK....
  5. @t34wrj mine still has the executable bit:
     -rwx------ 1 nobody users 32383675 Dec 21 2020 duplicacy_linux_x64_2.7.2
     @xenoblade what do the file permissions look like in your postgres appdata folder? These are mine (a sketch for checking which uid the container runs as follows after this list):
     root@FloBineDATA:/mnt/user/appdata/postgres-13.3# ls -la
     total 56
     drwx------ 1 999 users 530 May 23 04:11 .
     drwxrwxrwx 1 nobody users 974 May 26 15:39 ..
     -rw------- 1 999 999 3 Jul 31 2021 PG_VERSION
     drwxrwxr-x 1 999 1000 0 May 4 21:31 _BACKUPS_
     drwx------ 1 999 999 60 May 5 21:24 base
     drwx------ 1 999 999 606 May 25 05:00 global
     drwx------ 1 999 999 0 Jul 31 2021 pg_commit_ts
     drwx------ 1 999 999 0 Jul 31 2021 pg_dynshmem
     -rw------- 1 999 999 4782 Jul 31 2021 pg_hba.conf
     -rw------- 1 999 999 1636 Jul 31 2021 pg_ident.conf
     drwx------ 1 999 999 76 May 27 16:23 pg_logical
     drwx------ 1 999 999 28 Jul 31 2021 pg_multixact
     drwx------ 1 999 999 0 Jul 31 2021 pg_notify
     drwx------ 1 999 999 0 Jul 31 2021 pg_replslot
     drwx------ 1 999 999 0 Jul 31 2021 pg_serial
     drwx------ 1 999 999 0 Jul 31 2021 pg_snapshots
     drwx------ 1 999 999 0 May 23 04:11 pg_stat
     drwx------ 1 999 999 118 May 27 16:28 pg_stat_tmp
     drwx------ 1 999 999 8 May 10 23:04 pg_subtrans
     drwx------ 1 999 999 0 Jul 31 2021 pg_tblspc
     drwx------ 1 999 999 0 Jul 31 2021 pg_twophase
     drwx------ 1 999 999 268 May 27 02:13 pg_wal
     drwx------ 1 999 999 8 Jul 31 2021 pg_xact
     -rw------- 1 999 999 88 Jul 31 2021 postgresql.auto.conf
     -rw------- 1 999 999 28097 Jul 31 2021 postgresql.conf
     -rw------- 1 999 999 36 May 23 04:11 postmaster.opts
     -rw------- 1 999 999 94 May 23 04:11 postmaster.pid
  6. @t34wrj which container are you using? I installed "saspus/duplicacy-web" (Docker Hub URL: https://hub.docker.com/r/saspus/duplicacy-web) and this one "survived" the upgrade to 6.10... I think because I have these variables set: User ID: 99 (container variable USR_ID) and Group ID: 100 (container variable GRP_ID). It's like using "--user 99:100" in the extra parameters (a docker run sketch follows after this list). These are the file/directory permissions (all OK):
     root@FloBineDATA:/mnt/user/appdata/duplicacy# ls -la
     drwxrwxrwx 1 root root 46 Sep 13 2020 .
     drwxrwxrwx 1 nobody users 974 May 26 15:39 ..
     drwxr-xr-x 1 nobody users 18 Sep 13 2020 cache
     drwxrwxrwx 1 nobody users 170 May 27 12:06 config
     drwxrwxrwx 1 nobody users 378432 May 27 15:35 logs
     root@FloBineDATA:/mnt/user/appdata/duplicacy# cd config/
     root@FloBineDATA:/mnt/user/appdata/duplicacy/config# ls -la
     drwxrwxrwx 1 nobody users 170 May 27 12:06 .
     drwxrwxrwx 1 root root 46 Sep 13 2020 ..
     drwx------ 1 nobody users 210 Dec 21 2020 bin
     -rw------- 1 nobody users 31103 May 27 12:06 duplicacy.json
     -rw------- 1 nobody users 12209 Jan 13 2020 duplicacy.json_bkp-1
     drwx------ 1 nobody users 18 Jan 12 2020 filters
     -rw------- 1 nobody users 2971 May 10 02:00 licenses.json
     -rw-r--r-- 1 nobody users 33 Jan 2 2020 machine-id
     -rw-r--r-- 1 nobody users 144 Jan 2 2020 settings.json
     drwx------ 1 nobody users 34 Sep 13 2020 stats
  7. same problem here.... I removed all files in appdata and reconfigured the Unraid-API, adding this to the extra parameters because it slowed down my Unraid server and the unraid-api container always crashed: -e JAVA_OPTS="-Xmx2500m" (a note on this parameter follows after this list). I also used the hashed URL in the Unraid-API configuration, and everything seemed to work: I saw my dockers, my VMs and the Unraid details like array and USBs. But it still uses more resources than anything else on my server and finally crashes, just as it did before I added the "JAVA_OPTS" extra parameter (the screenshot was taken before setting the Java options!). These are the docker container logs from the newly configured unraid-api (create > configure > authenticate > working > crash):
     today at 11:25:24 > unraidapi@0.5.0 start
     today at 11:25:24 > cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js

     today at 11:25:25 WARN mode option is deprecated. You can safely remove it from nuxt.config

     today at 11:25:25 (node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
     today at 11:25:25 Update this package.json to use a subpath pattern like "./*".
     today at 11:25:25 (Use `node --trace-deprecation ...` to show where the warning was created)
     today at 11:25:26 Connected to mqtt broker
     today at 11:25:26 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:26     at Object.openSync (node:fs:585:3)
     today at 11:25:26     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:26     at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:26     at MqttClient.<anonymous> (/app/mqtt/index.js:49:7)
     today at 11:25:26     at MqttClient.emit (node:events:539:35)
     today at 11:25:26     at MqttClient.emit (node:domain:475:12)
     today at 11:25:26     at Readable.<anonymous> (/app/node_modules/mqtt/lib/client.js:1449:14)
     today at 11:25:26     at Readable.emit (node:events:527:28)
     today at 11:25:26     at Readable.emit (node:domain:475:12)
     today at 11:25:26     at endReadableNT (/app/node_modules/mqtt/node_modules/readable-stream/lib/_stream_readable.js:1010:12) {
     today at 11:25:26   errno: -2,
     today at 11:25:26   syscall: 'open',
     today at 11:25:26   code: 'ENOENT',
     today at 11:25:26   path: 'config/mqttKeys'
     today at 11:25:26 }
     today at 11:25:26 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work

     today at 11:25:26 READY Server listening on http://0.0.0.0:80

     today at 11:25:36 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:36     at Object.openSync (node:fs:585:3)
     today at 11:25:36     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:36     at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:36     at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:36     at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:36     at processTimers (node:internal/timers:502:7) {
     today at 11:25:36   errno: -2,
     today at 11:25:36   syscall: 'open',
     today at 11:25:36   code: 'ENOENT',
     today at 11:25:36   path: 'config/mqttKeys'
     today at 11:25:36 }
     today at 11:25:36 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:25:46 Failed to retrieve config file, creating new.

     today at 11:25:46 ERROR ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:46     at Object.openSync (node:fs:585:3)
     today at 11:25:46     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:46     at default (api/getServers.js:27:36)
     today at 11:25:46     at call (node_modules/connect/index.js:239:7)
     today at 11:25:46     at next (node_modules/connect/index.js:183:5)
     today at 11:25:46     at next (node_modules/connect/index.js:161:14)
     today at 11:25:46     at next (node_modules/connect/index.js:161:14)
     today at 11:25:46     at SendStream.error (node_modules/serve-static/index.js:121:7)
     today at 11:25:46     at SendStream.emit (node:events:527:28)
     today at 11:25:46     at SendStream.emit (node:domain:475:12)

     today at 11:25:46 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:46     at Object.openSync (node:fs:585:3)
     today at 11:25:46     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:46     at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:46     at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:46     at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:46     at processTimers (node:internal/timers:502:7) {
     today at 11:25:46   errno: -2,
     today at 11:25:46   syscall: 'open',
     today at 11:25:46   code: 'ENOENT',
     today at 11:25:46   path: 'config/mqttKeys'
     today at 11:25:46 }
     today at 11:25:46 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work

     today at 11:25:56 ERROR ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:56     at Object.openSync (node:fs:585:3)
     today at 11:25:56     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:56     at default (api/getServers.js:27:36)
     today at 11:25:56     at call (node_modules/connect/index.js:239:7)
     today at 11:25:56     at next (node_modules/connect/index.js:183:5)
     today at 11:25:56     at next (node_modules/connect/index.js:161:14)
     today at 11:25:56     at next (node_modules/connect/index.js:161:14)
     today at 11:25:56     at SendStream.error (node_modules/serve-static/index.js:121:7)
     today at 11:25:56     at SendStream.emit (node:events:527:28)
     today at 11:25:56     at SendStream.emit (node:domain:475:12)

     today at 11:25:56 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:56     at Object.openSync (node:fs:585:3)
     today at 11:25:56     at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:56     at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:56     at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:56     at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:56     at processTimers (node:internal/timers:502:7) {
     today at 11:25:56   errno: -2,
     today at 11:25:56   syscall: 'open',
     today at 11:25:56   code: 'ENOENT',
     today at 11:25:56   path: 'config/mqttKeys'
     today at 11:25:56 }
     today at 11:25:56 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:29:29 Connected to mqtt broker
     today at 11:29:30 Connected to mqtt broker
     today at 11:29:31 Connected to mqtt broker
     today at 11:29:33 Connected to mqtt broker
     today at 11:29:34 Connected to mqtt broker
     today at 11:29:35 Connected to mqtt broker
     today at 11:29:36 Connected to mqtt broker
     today at 11:29:37 Connected to mqtt broker
     today at 11:31:51 npm ERR! path /app
     today at 11:31:51 npm ERR! command failed
     today at 11:31:51 npm ERR! signal SIGTERM
     today at 11:31:51 npm ERR! command sh -c cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js

     today at 11:31:51 npm ERR! A complete log of this run can be found in:
     today at 11:31:51 npm ERR! /root/.npm/_logs/2022-05-27T09_25_24_461Z-debug-0.log

     today at 11:31:52 > unraidapi@0.5.0 start
     today at 11:31:52 > cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js

     today at 11:31:52 WARN mode option is deprecated. You can safely remove it from nuxt.config

     today at 11:31:52 (node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
     today at 11:31:52 Update this package.json to use a subpath pattern like "./*".
     today at 11:31:52 (Use `node --trace-deprecation ...` to show where the warning was created)
     today at 11:31:55 Connected to mqtt broker

     today at 11:31:55 READY Server listening on http://0.0.0.0:80

     today at 11:31:58 Connected to mqtt broker
     today at 11:31:59 Connected to mqtt broker
     today at 11:32:02 Connected to mqtt broker
     today at 11:32:03 Connected to mqtt broker
     today at 11:32:05 Connected to mqtt broker
     today at 11:32:06 Connected to mqtt broker
     today at 11:35:20 Connected to mqtt broker
     today at 11:35:21 Connected to mqtt broker
     today at 11:35:24 Connected to mqtt broker
     today at 11:35:25 Connected to mqtt broker
     today at 11:38:39 Connected to mqtt broker
     today at 11:38:40 Connected to mqtt broker
     today at 11:38:42 Connected to mqtt broker
     today at 11:38:43 Connected to mqtt broker
     today at 11:38:45 Connected to mqtt broker
     today at 11:41:59 Connected to mqtt broker
     today at 11:42:01 Connected to mqtt broker
     today at 11:42:02 Connected to mqtt broker
     today at 11:45:16 Connected to mqtt broker
     today at 11:45:17 Connected to mqtt broker
     today at 11:45:19 Connected to mqtt broker
     today at 11:45:20 Connected to mqtt broker
     today at 11:48:35 Connected to mqtt broker
     today at 11:48:36 Connected to mqtt broker
     today at 11:48:38 Connected to mqtt broker
     today at 11:48:39 Connected to mqtt broker
     today at 11:48:41 Connected to mqtt broker
     today at 11:48:42 Connected to mqtt broker

     today at 11:50:42 <--- Last few GCs --->

     today at 11:50:42 [26:0x4aaa470] 1130031 ms: Mark-sweep 3995.4 (4140.7) -> 3992.9 (4138.4) MB, 86.6 / 0.1 ms (average mu = 0.111, current mu = 0.032) allocation failure scavenge might not succeed
     today at 11:50:42 [26:0x4aaa470] 1130120 ms: Mark-sweep 3996.0 (4141.3) -> 3993.7 (4139.2) MB, 86.3 / 0.0 ms (average mu = 0.074, current mu = 0.034) allocation failure scavenge might not succeed

     today at 11:50:42 <--- JS stacktrace --->

     today at 11:50:42 FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
     today at 11:50:42  1: 0xb09c10 node::Abort() [node]
     today at 11:50:42  2: 0xa1c193 node::FatalError(char const*, char const*) [node]
     today at 11:50:42  3: 0xcf8dbe v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
     today at 11:50:42  4: 0xcf9137 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
     today at 11:50:42  5: 0xeb09d5 [node]
     today at 11:50:42  6: 0xeb14b6 [node]
     today at 11:50:42  7: 0xebf9de [node]
     today at 11:50:42  8: 0xec0420 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
     today at 11:50:42  9: 0xec339e v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
     today at 11:50:42 10: 0xe848da v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
     today at 11:50:42 11: 0x11fd626 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
     today at 11:50:42 12: 0x15f2099 [node]
     today at 11:50:42 Container stopped
  8. I have also encountered these problems after the upgrade to 6.10. I have appdata shared via NFS and SMB and cannot access or write the vast majority of my docker container data. See this thread here: Whether a docker container works/starts properly after installation seems to depend on whether and how the developer sets the uid:gid used by that container. With some containers you can add "--user 99:100" to the extra parameters or create the appdata folder with permissions "777" before installing it. If you know which uid:gid a container uses, you can also set that instead, like the grafana container that wants "472:root". But this - honestly - can't be a final solution.... I don't think anyone has opened a bug report yet, or have you found one? OK, somehow it is also a problem on the container developer's side, but to me the bigger problem is how Unraid sets the permissions. Or am I seeing this wrong?
  9. +1! Yes, I also miss the colored logs. What is the reason for this "design decision"? Finding errors was very easy and fast. Please bring it back!
  10. As local backup storage I use a hard disk mounted via the "Unassigned Devices" plugin. As remote backup storage I use Strato HiDrive. As backup software I use the "Duplicacy" Docker container (saspus/duplicacy-web). Duplicacy backs up all data to the local backup storage. Afterwards, Duplicacy copies the data that is really important to me, encrypted, to my remote backup storage (HiDrive). I also recently got HiDrive S3 storage from Strato, to which I want to back up the less important data as well once I find the time and leisure. This works well. Of course I have also successfully tested restores a few times, both from the existing Duplicacy Docker on Unraid and from my notebook, because if Unraid is dead, that has to work too. (A rough CLI sketch of this backup/copy flow follows after this list.)
  11. @mikl isn't it mandatory to ALSO add "--user 99:100" to the extra parameters when using your chmod/chown script? ...because otherwise a container (like grafana, which uses "472:root") can't access its files anymore?
  12. I have the same problems with a couple of my docker containers. Yes, but I'm not sure if this isn't only a temporary fix...? Not sure if "--user 99:100" in the extra parameters will work with every container... I tried to install another instance of grafana and it doesn't even start with default settings.
     My grafana container (running):
     root@FloBineDATA:/mnt/user/appdata/grafana# ls -la
     total 13312
     drwxrwxrwx 1 nobody users 76 May 24 19:51 .
     drwxrwxrwx 1 nobody users 928 May 24 19:53 ..
     drwxrwxr-x 1 472 root 2 Dec 2 01:02 alerting
     drwxrwxr-x 1 472 root 0 Jun 14 2021 csv
     -rw-r--r-- 1 472 472 13631488 May 24 19:51 grafana.db
     drwxrwxr-x 1 472 472 276 Aug 22 2021 plugins
     drwxrwxr-x 1 472 472 0 Sep 18 2020 png
     drwxrwxr-x 1 472 root 0 Apr 10 23:34 storage
     The new test grafana container does not start:
     root@FloBineDATA:/mnt/user/appdata/grafana-1# ls -la
     total 0
     drwxr-xr-x 1 nobody users 0 May 24 19:53 .
     drwxrwxrwx 1 nobody users 928 May 24 19:53 ..
     The container log:
     GF_PATHS_DATA='/var/lib/grafana' is not writable.
     You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
     mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
     ** Press ANY KEY to close this window **
     You must add "--user 99:100" to the extra parameters OR create the grafana appdata folder with owner "472:root" before starting the container (see the sketch after this list). This can't be a final solution.... Did anybody open a bug report, or is it a docker container (maintainer) problem (maintainers would have to use "--user 99:100" to get the container running on Unraid >= 6.10.0)??
  13. Yes, I had that too. Under Settings > Management Access I had set "Use SSL/TLS (USE_SSL)" to "auto" on 6.9.2. With this setting, using https://IP.von.un.raid was no longer possible after the upgrade to 6.10.0. However, I could still reach http://IP.von.un.raid, log in, and set USE_SSL=yes. Then it worked again as before.
  14. OK, it seems to be fixed for me. I rebooted several times, updated Unraid to 6.10.1, and everything (shim network) works as expected. Note: I realized that Unraid does not create the shim network after recovering (rebooting) from an Unraid crash. I still don't know exactly why it crashes... but my Raspi (with pivccu Homematic CCU3) crashes at the same time. I suspect that it's my Raspi crashing first and Unraid crashing because of the Raspi; they are on the same power strip... Investigating......... So this bug report can be closed again!
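A quick shell sketch for the inotify value mentioned in post 2 above. The sysctl key is standard Linux; 524288 is just the value quoted in the post, and making a change permanent on Unraid (via the Tips and Tweaks plugin or the go file) is outside this sketch:

     # show the current limit
     sysctl fs.inotify.max_user_watches
     cat /proc/sys/fs/inotify/max_user_watches

     # raise it for the running system only (not persistent across reboots)
     sysctl -w fs.inotify.max_user_watches=524288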
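For the Vaultwarden setup in post 3: a hedged docker run equivalent of giving the container its own IP on br0. On Unraid this is normally done in the container template (Network Type: br0 plus a fixed IP), so the network name and the address below are placeholders, not a verified recipe:

     # assumes a custom (macvlan) docker network named br0 already exists on the host
     docker run -d \
       --name vaultwarden \
       --network br0 \
       --ip 192.168.1.50 \
       -v /mnt/user/appdata/vaultwarden:/data \
       vaultwarden/server:latest

     # then block 192.168.1.50 from WAN access on the router/firewall and put an
     # HTTPS reverse proxy (swag, npm, ...) in front of it, as described in the post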
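For the permission comparison in posts 4/5: a small sketch for checking which uid/gid the postgres container actually runs with, so it can be matched against the 999 owner shown in the listing. The container name is guessed from the appdata path in the post and may differ on your system:

     # report the uid/gid of the user inside the container
     docker exec postgres-13.3 id
     # the official postgres image typically runs as uid=999(postgres)

     # if the appdata files ended up with a different owner, they could be realigned
     chown -R 999:999 /mnt/user/appdata/postgres-13.3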
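The docker run sketch referenced in post 6, showing the USR_ID/GRP_ID variables for saspus/duplicacy-web. The port and mount points are assumptions to be checked against the Docker Hub page linked in the post:

     docker run -d \
       --name duplicacy \
       -e USR_ID=99 \
       -e GRP_ID=100 \
       -p 3875:3875 \
       -v /mnt/user/appdata/duplicacy/config:/config \
       -v /mnt/user/appdata/duplicacy/logs:/logs \
       -v /mnt/user/appdata/duplicacy/cache:/cache \
       saspus/duplicacy-web

     # the post notes this is like adding "--user 99:100" to the extra parameters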
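On the extra parameter from post 7: the first line below is exactly what the post adds; the second is an assumption on my part. The crash in the log is a Node.js/V8 "JavaScript heap out of memory", and -Xmx is a JVM flag, so if the goal is to cap the Node heap, the usual mechanism would be NODE_OPTIONS (not something the post or the container documentation confirms):

     # extra parameter used in the post (JVM-style flag)
     -e JAVA_OPTS="-Xmx2500m"

     # hypothetical Node.js equivalent for capping the V8 heap at roughly 2500 MB
     -e NODE_OPTIONS="--max-old-space-size=2500"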
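Post 10 describes the backup flow through the duplicacy-web GUI (back up everything to a local disk, then copy the important snapshots, encrypted, to HiDrive). As rough orientation only, a sketch of the same flow with the duplicacy CLI; the storage names, paths and the SFTP endpoint are placeholders, and the web UI configures all of this through its own forms:

     # one-time setup in the directory to be backed up
     cd /mnt/user
     duplicacy init -e mydata /mnt/disks/backupdisk/duplicacy
     duplicacy add -e -copy default remote mydata sftp://user@hidrive-host/duplicacy

     # regular run: local backup first, then copy the snapshots to the remote storage
     duplicacy backup -stats
     duplicacy copy -from default -to remote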
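For the grafana case in posts 8 and 12: the two workarounds mentioned there, written out as shell commands. 472:0 is the "472:root" owner the posts say the grafana image expects; the path matches the test instance in post 12:

     # workaround 1: pre-create the appdata folder with the owner grafana expects
     mkdir -p /mnt/user/appdata/grafana-1
     chown -R 472:0 /mnt/user/appdata/grafana-1

     # workaround 2: run the container as nobody:users instead
     # (Unraid template > Extra Parameters)
     --user 99:100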