vakilando

Everything posted by vakilando

  1. The value can be changed via the "Tips and Tweaks" plugin. On my system, under "Maximum watches 'fs.inotify.max_user_watches':", it shows 524288. This should be the default value, since I'm not aware of ever having changed it. (A quick way to check it from the command line is shown below.)
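     For reference, a rough sketch of checking/setting the same value from a terminal. The plugin is still the comfortable way and, as far as I understand (not verified), also the one that keeps the value across reboots:
     # Show the current value (524288 in my case)
     sysctl fs.inotify.max_user_watches
     # Set it for the running system only; use the Tips and Tweaks plugin to make it stick
     sysctl -w fs.inotify.max_user_watches=524288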
  2. Exactly. Search for Vaultwarden under Apps and install the container. Give the container its own IP address, i.e. don't select host or bridge as the network but br0, br1 etc., depending on what you have. Then block WAN access to that IP on your router or firewall. For the login to work, however, you need an HTTPS connection. I do that via a reverse proxy (previously swag, now npm), although my Vaultwarden is also reachable from the internet. I assume this also works without a reverse proxy, but I have never tried it.... 2FA should work just the same with the built-in services. Emergency contacts need a reachable mail server. However, those emergency contacts also need appropriate access to your environment (local or VPN). (A rough sketch of the setup is below.)
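     Roughly what the Apps template does under the hood, as a manual sketch - the IP address, network name and appdata path are only example values, not a recipe:
     # Vaultwarden on its own IP in the br0 (macvlan) docker network
     docker run -d --name vaultwarden \
       --network br0 --ip 192.168.178.50 \
       -v /mnt/user/appdata/vaultwarden:/data \
       vaultwarden/server:latest
     # Then block 192.168.178.50 from WAN access on the router/firewall
     # and put HTTPS in front of it (reverse proxy such as swag or npm).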
  3. @xenoblade strange, nearly the same permissions here and I had no problems with postgres. I'm also pretty sure they are ok....
  4. @t34wrj mine still has the executable bit:
     -rwx------ 1 nobody users 32383675 Dec 21 2020 duplicacy_linux_x64_2.7.2
     @xenoblade what do the file permissions look like in your postgres appdata folder? These are mine:
     root@FloBineDATA:/mnt/user/appdata/postgres-13.3# ls -la
     total 56
     drwx------ 1 999 users 530 May 23 04:11 .
     drwxrwxrwx 1 nobody users 974 May 26 15:39 ..
     -rw------- 1 999 999 3 Jul 31 2021 PG_VERSION
     drwxrwxr-x 1 999 1000 0 May 4 21:31 _BACKUPS_
     drwx------ 1 999 999 60 May 5 21:24 base
     drwx------ 1 999 999 606 May 25 05:00 global
     drwx------ 1 999 999 0 Jul 31 2021 pg_commit_ts
     drwx------ 1 999 999 0 Jul 31 2021 pg_dynshmem
     -rw------- 1 999 999 4782 Jul 31 2021 pg_hba.conf
     -rw------- 1 999 999 1636 Jul 31 2021 pg_ident.conf
     drwx------ 1 999 999 76 May 27 16:23 pg_logical
     drwx------ 1 999 999 28 Jul 31 2021 pg_multixact
     drwx------ 1 999 999 0 Jul 31 2021 pg_notify
     drwx------ 1 999 999 0 Jul 31 2021 pg_replslot
     drwx------ 1 999 999 0 Jul 31 2021 pg_serial
     drwx------ 1 999 999 0 Jul 31 2021 pg_snapshots
     drwx------ 1 999 999 0 May 23 04:11 pg_stat
     drwx------ 1 999 999 118 May 27 16:28 pg_stat_tmp
     drwx------ 1 999 999 8 May 10 23:04 pg_subtrans
     drwx------ 1 999 999 0 Jul 31 2021 pg_tblspc
     drwx------ 1 999 999 0 Jul 31 2021 pg_twophase
     drwx------ 1 999 999 268 May 27 02:13 pg_wal
     drwx------ 1 999 999 8 Jul 31 2021 pg_xact
     -rw------- 1 999 999 88 Jul 31 2021 postgresql.auto.conf
     -rw------- 1 999 999 28097 Jul 31 2021 postgresql.conf
     -rw------- 1 999 999 36 May 23 04:11 postmaster.opts
     -rw------- 1 999 999 94 May 23 04:11 postmaster.pid
  5. @t34wrj which container are you using? I installed "saspus/duplicacy-web" (Docker Hub URL: https://hub.docker.com/r/saspus/duplicacy-web) and this one "survived" the upgrade to 6.10... I think because I have these variables set (see the sketch below):
     User ID: 99 (Container Variable: USR_ID)
     Group ID: 100 (Container Variable: GRP_ID)
     (It's like using "--user 99:100" in extra parameters.)
     These are the file/directory permissions (all ok):
     root@FloBineDATA:/mnt/user/appdata/duplicacy# ls -la
     drwxrwxrwx 1 root root 46 Sep 13 2020 .
     drwxrwxrwx 1 nobody users 974 May 26 15:39 ..
     drwxr-xr-x 1 nobody users 18 Sep 13 2020 cache
     drwxrwxrwx 1 nobody users 170 May 27 12:06 config
     drwxrwxrwx 1 nobody users 378432 May 27 15:35 logs
     root@FloBineDATA:/mnt/user/appdata/duplicacy# cd config/
     root@FloBineDATA:/mnt/user/appdata/duplicacy/config# ls -la
     drwxrwxrwx 1 nobody users 170 May 27 12:06 .
     drwxrwxrwx 1 root root 46 Sep 13 2020 ..
     drwx------ 1 nobody users 210 Dec 21 2020 bin
     -rw------- 1 nobody users 31103 May 27 12:06 duplicacy.json
     -rw------- 1 nobody users 12209 Jan 13 2020 duplicacy.json_bkp-1
     drwx------ 1 nobody users 18 Jan 12 2020 filters
     -rw------- 1 nobody users 2971 May 10 02:00 licenses.json
     -rw-r--r-- 1 nobody users 33 Jan 2 2020 machine-id
     -rw-r--r-- 1 nobody users 144 Jan 2 2020 settings.json
     drwx------ 1 nobody users 34 Sep 13 2020 stats
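     For comparison, a manual sketch of the same user mapping when starting the container by hand. The port and volume paths are assumptions based on my appdata layout; on Unraid the template fields above do the same job:
     # USR_ID/GRP_ID map the app to nobody:users (99:100), same effect as "--user 99:100"
     docker run -d --name duplicacy-web \
       -e USR_ID=99 -e GRP_ID=100 \
       -p 3875:3875 \
       -v /mnt/user/appdata/duplicacy/config:/config \
       -v /mnt/user/appdata/duplicacy/logs:/logs \
       -v /mnt/user/appdata/duplicacy/cache:/cache \
       saspus/duplicacy-web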
  6. Same problem here.... I removed all files in appdata and reconfigured the Unraid-API, adding this to the extra parameters because it slowed down my Unraid server and the unraid-api container always crashed: -e JAVA_OPTS="-Xmx2500m". I also used the hashed URL in the Unraid-API configuration, and everything seemed to work: I saw my dockers, my VMs and the Unraid details like the array and USB devices. But it still uses more memory than anything else on my server and eventually crashes, just as it did before I added "JAVA_OPTS" to the extra parameters (the picture was taken before setting the Java options!). These are the docker container logs from the newly configured unraid-api (create > configure > authenticate > working > crash), with a note after the log:
     today at 11:25:24> [email protected] start
     today at 11:25:24> cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
     today at 11:25:24
     today at 11:25:25
     today at 11:25:25 WARN mode option is deprecated. You can safely remove it from nuxt.config
     today at 11:25:25
     today at 11:25:25(node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
     today at 11:25:25Update this package.json to use a subpath pattern like "./*".
     today at 11:25:25(Use `node --trace-deprecation ...` to show where the warning was created)
     today at 11:25:26Connected to mqtt broker
     today at 11:25:26Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:26 at Object.openSync (node:fs:585:3)
     today at 11:25:26 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:26 at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:26 at MqttClient.<anonymous> (/app/mqtt/index.js:49:7)
     today at 11:25:26 at MqttClient.emit (node:events:539:35)
     today at 11:25:26 at MqttClient.emit (node:domain:475:12)
     today at 11:25:26 at Readable.<anonymous> (/app/node_modules/mqtt/lib/client.js:1449:14)
     today at 11:25:26 at Readable.emit (node:events:527:28)
     today at 11:25:26 at Readable.emit (node:domain:475:12)
     today at 11:25:26 at endReadableNT (/app/node_modules/mqtt/node_modules/readable-stream/lib/_stream_readable.js:1010:12) {
     today at 11:25:26 errno: -2,
     today at 11:25:26 syscall: 'open',
     today at 11:25:26 code: 'ENOENT',
     today at 11:25:26 path: 'config/mqttKeys'
     today at 11:25:26}
     today at 11:25:26The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:25:26
     today at 11:25:26 READY Server listening on http://0.0.0.0:80
     today at 11:25:26
     today at 11:25:36Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:36 at Object.openSync (node:fs:585:3)
     today at 11:25:36 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:36 at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:36 at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:36 at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:36 at processTimers (node:internal/timers:502:7) {
     today at 11:25:36 errno: -2,
     today at 11:25:36 syscall: 'open',
     today at 11:25:36 code: 'ENOENT',
     today at 11:25:36 path: 'config/mqttKeys'
     today at 11:25:36}
     today at 11:25:36The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:25:46Failed to retrieve config file, creating new.
     today at 11:25:46
     today at 11:25:46 ERROR ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:46
     today at 11:25:46 at Object.openSync (node:fs:585:3)
     today at 11:25:46 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:46 at default (api/getServers.js:27:36)
     today at 11:25:46 at call (node_modules/connect/index.js:239:7)
     today at 11:25:46 at next (node_modules/connect/index.js:183:5)
     today at 11:25:46 at next (node_modules/connect/index.js:161:14)
     today at 11:25:46 at next (node_modules/connect/index.js:161:14)
     today at 11:25:46 at SendStream.error (node_modules/serve-static/index.js:121:7)
     today at 11:25:46 at SendStream.emit (node:events:527:28)
     today at 11:25:46 at SendStream.emit (node:domain:475:12)
     today at 11:25:46
     today at 11:25:46Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:46 at Object.openSync (node:fs:585:3)
     today at 11:25:46 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:46 at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:46 at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:46 at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:46 at processTimers (node:internal/timers:502:7) {
     today at 11:25:46 errno: -2,
     today at 11:25:46 syscall: 'open',
     today at 11:25:46 code: 'ENOENT',
     today at 11:25:46 path: 'config/mqttKeys'
     today at 11:25:46}
     today at 11:25:46The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:25:56
     today at 11:25:56 ERROR ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:56
     today at 11:25:56 at Object.openSync (node:fs:585:3)
     today at 11:25:56 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:56 at default (api/getServers.js:27:36)
     today at 11:25:56 at call (node_modules/connect/index.js:239:7)
     today at 11:25:56 at next (node_modules/connect/index.js:183:5)
     today at 11:25:56 at next (node_modules/connect/index.js:161:14)
     today at 11:25:56 at next (node_modules/connect/index.js:161:14)
     today at 11:25:56 at SendStream.error (node_modules/serve-static/index.js:121:7)
     today at 11:25:56 at SendStream.emit (node:events:527:28)
     today at 11:25:56 at SendStream.emit (node:domain:475:12)
     today at 11:25:56
     today at 11:25:56Error: ENOENT: no such file or directory, open 'config/mqttKeys'
     today at 11:25:56 at Object.openSync (node:fs:585:3)
     today at 11:25:56 at Proxy.readFileSync (node:fs:453:35)
     today at 11:25:56 at updateMQTT (/app/mqtt/index.js:276:30)
     today at 11:25:56 at Timeout._onTimeout (/app/mqtt/index.js:308:5)
     today at 11:25:56 at listOnTimeout (node:internal/timers:559:17)
     today at 11:25:56 at processTimers (node:internal/timers:502:7) {
     today at 11:25:56 errno: -2,
     today at 11:25:56 syscall: 'open',
     today at 11:25:56 code: 'ENOENT',
     today at 11:25:56 path: 'config/mqttKeys'
     today at 11:25:56}
     today at 11:25:56The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
     today at 11:29:29Connected to mqtt broker
     today at 11:29:30Connected to mqtt broker
     today at 11:29:31Connected to mqtt broker
     today at 11:29:33Connected to mqtt broker
     today at 11:29:34Connected to mqtt broker
     today at 11:29:35Connected to mqtt broker
     today at 11:29:36Connected to mqtt broker
     today at 11:29:37Connected to mqtt broker
     today at 11:31:51npm ERR! path /app
     today at 11:31:51npm ERR! command failed
     today at 11:31:51npm ERR! signal SIGTERM
     today at 11:31:51npm ERR! command sh -c cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
     today at 11:31:51
     today at 11:31:51npm ERR! A complete log of this run can be found in:
     today at 11:31:51npm ERR! /root/.npm/_logs/2022-05-27T09_25_24_461Z-debug-0.log
     today at 11:31:52
     today at 11:31:52> [email protected] start
     today at 11:31:52> cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
     today at 11:31:52
     today at 11:31:52
     today at 11:31:52 WARN mode option is deprecated. You can safely remove it from nuxt.config
     today at 11:31:52
     today at 11:31:52(node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
     today at 11:31:52Update this package.json to use a subpath pattern like "./*".
     today at 11:31:52(Use `node --trace-deprecation ...` to show where the warning was created)
     today at 11:31:55Connected to mqtt broker
     today at 11:31:55
     today at 11:31:55 READY Server listening on http://0.0.0.0:80
     today at 11:31:55
     today at 11:31:58Connected to mqtt broker
     today at 11:31:59Connected to mqtt broker
     today at 11:32:02Connected to mqtt broker
     today at 11:32:03Connected to mqtt broker
     today at 11:32:05Connected to mqtt broker
     today at 11:32:06Connected to mqtt broker
     today at 11:35:20Connected to mqtt broker
     today at 11:35:21Connected to mqtt broker
     today at 11:35:24Connected to mqtt broker
     today at 11:35:25Connected to mqtt broker
     today at 11:38:39Connected to mqtt broker
     today at 11:38:40Connected to mqtt broker
     today at 11:38:42Connected to mqtt broker
     today at 11:38:43Connected to mqtt broker
     today at 11:38:45Connected to mqtt broker
     today at 11:41:59Connected to mqtt broker
     today at 11:42:01Connected to mqtt broker
     today at 11:42:02Connected to mqtt broker
     today at 11:45:16Connected to mqtt broker
     today at 11:45:17Connected to mqtt broker
     today at 11:45:19Connected to mqtt broker
     today at 11:45:20Connected to mqtt broker
     today at 11:48:35Connected to mqtt broker
     today at 11:48:36Connected to mqtt broker
     today at 11:48:38Connected to mqtt broker
     today at 11:48:39Connected to mqtt broker
     today at 11:48:41Connected to mqtt broker
     today at 11:48:42Connected to mqtt broker
     today at 11:50:42
     today at 11:50:42<--- Last few GCs --->
     today at 11:50:42
     today at 11:50:42[26:0x4aaa470] 1130031 ms: Mark-sweep 3995.4 (4140.7) -> 3992.9 (4138.4) MB, 86.6 / 0.1 ms (average mu = 0.111, current mu = 0.032) allocation failure scavenge might not succeed
     today at 11:50:42[26:0x4aaa470] 1130120 ms: Mark-sweep 3996.0 (4141.3) -> 3993.7 (4139.2) MB, 86.3 / 0.0 ms (average mu = 0.074, current mu = 0.034) allocation failure scavenge might not succeed
     today at 11:50:42
     today at 11:50:42
     today at 11:50:42<--- JS stacktrace --->
     today at 11:50:42
     today at 11:50:42FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
     today at 11:50:42 1: 0xb09c10 node::Abort() [node]
     today at 11:50:42 2: 0xa1c193 node::FatalError(char const*, char const*) [node]
     today at 11:50:42 3: 0xcf8dbe v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
     today at 11:50:42 4: 0xcf9137 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
     today at 11:50:42 5: 0xeb09d5 [node]
     today at 11:50:42 6: 0xeb14b6 [node]
     today at 11:50:42 7: 0xebf9de [node]
     today at 11:50:42 8: 0xec0420 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
     today at 11:50:42 9: 0xec339e v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
     today at 11:50:42 10: 0xe848da v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
     today at 11:50:42 11: 0x11fd626 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
     today at 11:50:42 12: 0x15f2099 [node]
     today at 11:50:42Container stopped
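     Side note after rereading the trace: the final crash is a Node.js V8 heap out-of-memory ("JavaScript heap out of memory"), so a Java flag probably has no effect here. If the container honours the standard Node environment variables (an assumption, not verified), the matching knob in the extra parameters would look something like this:
     # hypothetical extra parameter; 2560 MB is just a guess at a sensible limit
     -e NODE_OPTIONS="--max-old-space-size=2560"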
  7. I have also encountered these problems after the upgrade to 6.10. I have appdata shared via NFS and SMB and cannot access or write the vast majority of my docker container data. See this thread here: Whether a docker container works/starts properly after installation or not seems to depend on how and whether the developer sets the uid:gid used by that container. With some containers you can add "--user 99:100" to the extra parameters or create the appdata folder with permissions "777" before installing it (see the sketch below). If you know which uid:gid a container uses, you can also set that instead, like the grafana container that wants "472:root". But this - honestly - can't be a final solution.... I don't think anyone has opened a bug report yet, or have you found one? Ok, somehow it is also a problem on the container developer's side, but I see the way Unraid sets the permissions as the bigger problem. Or am I seeing it wrong?
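     A rough sketch of the two folder-side workarounds mentioned above (the container name is only an example; the third option remains "--user 99:100" in the template's Extra Parameters):
     # Option 1: create the appdata folder wide open before installing the container
     mkdir -p /mnt/user/appdata/mycontainer
     chmod -R 777 /mnt/user/appdata/mycontainer
     # Option 2: hand the folder to the uid:gid the container actually runs as
     # (e.g. 472:root for grafana, 99:100 for nobody:users)
     chown -R 472:root /mnt/user/appdata/grafana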
  8. +1! Yes, I also miss the colored logs. What is the reason for this "design decision"? Finding errors was very easy and fast. Please bring it back!
  9. As local backup storage I use a hard disk mounted via the "Unassigned Devices" plugin. As remote backup storage I use Strato HiDrive. As backup software I use the "Duplicacy" docker (saspus/duplicacy-web). Duplicacy backs up all data to the local backup storage. Afterwards, Duplicacy copies the data that is really important to me, encrypted, to my remote backup storage (HiDrive). I also recently got HiDrive S3 storage from Strato, to which I want to back up the not-quite-so-important data as well, whenever I find the time and leisure. Works well. Of course I have also tested restores successfully several times, both from the existing Duplicacy docker on Unraid and from my notebook - if Unraid is dead, that has to work too.
  10. @mikl isn't it mandatory to ALSO add "--user 99:100" to the extra parameters when using your "chmod/chown" script? ...because otherwise the docker (like grafana, which uses "472:root") can't access its files anymore?
  11. I have the same problems with a couple of my docker containers. Yes, but I'm not sure whether this isn't only a temporary fix...? Not sure if "--user 99:100 in the extra parameters" will work with every container... I tried to install another instance of grafana and it doesn't even start with default settings.
     My grafana container (running):
     root@FloBineDATA:/mnt/user/appdata/grafana# ls -la
     total 13312
     drwxrwxrwx 1 nobody users 76 May 24 19:51 .
     drwxrwxrwx 1 nobody users 928 May 24 19:53 ..
     drwxrwxr-x 1 472 root 2 Dec 2 01:02 alerting
     drwxrwxr-x 1 472 root 0 Jun 14 2021 csv
     -rw-r--r-- 1 472 472 13631488 May 24 19:51 grafana.db
     drwxrwxr-x 1 472 472 276 Aug 22 2021 plugins
     drwxrwxr-x 1 472 472 0 Sep 18 2020 png
     drwxrwxr-x 1 472 root 0 Apr 10 23:34 storage
     The new test grafana container does not start:
     root@FloBineDATA:/mnt/user/appdata/grafana-1# ls -la
     total 0
     drwxr-xr-x 1 nobody users 0 May 24 19:53 .
     drwxrwxrwx 1 nobody users 928 May 24 19:53 ..
     The container log:
     GF_PATHS_DATA='/var/lib/grafana' is not writable.
     You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
     mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
     ** Press ANY KEY to close this window **
     You must add "--user 99:100" to the extra parameters OR create the grafana appdata folder with ownership "472:root" before starting the container (see the sketch below). This can't be a final solution.... Did anybody open a bug report, or is it a docker container (maintainer) problem (maintainers must use "--user 99:100" to get the container running in Unraid >= 6.10.0)??
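     A quick way to check which uid:gid an image expects before its first start - a sketch; "grafana-1" and the path are just my test instance's names:
     # ask the image which user it runs as (grafana reports uid 472 here)
     docker run --rm --entrypoint id grafana/grafana
     # then hand the new appdata folder to that user before the first start...
     chown -R 472:root /mnt/user/appdata/grafana-1
     # ...or force the container onto nobody:users via Extra Parameters instead: --user 99:100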
  12. Yes, I had that too. Under Settings > Management Access I had the option Use TLS/SSL (USE_SSL) set to "auto" on 6.9.2. With this setting, using https://ip.of.un.raid was no longer possible after the upgrade to 6.10.0. However, I could still access http://ip.of.un.raid, log in and set USE_SSL=yes. Then it worked again as before.
  13. Ok, it seems to be fixed for me. I rebooted several times, updated Unraid to 6.10.1, and everything (shim network) works as expected. Note: I realized that Unraid does not create the shim network after recovering (rebooting) from an Unraid crash. I still don't know exactly why it crashes... but my raspi (with pivccu Homematic CCU3) crashes at the same time. My suspicion is that the raspi crashes first and Unraid crashes because of it. They are on the same power strip... Investigating......... So this bug report can be closed again!
  14. @mgutt For me the command still works. Have you tried uninstalling it in the Nerd Tools and installing it again? @heijoni I had/have the same problem with a Windows VM (with SeaBIOS and a passed-through Nvidia 1050TI). The second one, with OVMF BIOS and the same passed-through Nvidia, keeps running as before....
  15. I know, it's an old thread.... Just for information: I tested the command on my upgraded Unraid (6.9.2 to 6.10.0) and commented out the line in the go file (a guarded alternative is sketched below).
     /etc/rc.d/rc.haveged stop
     -bash: /etc/rc.d/rc.haveged: No such file or directory
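     If you prefer to keep the line instead of commenting it out, a guarded version in the go file should be harmless (just a sketch; on 6.10 the rc script simply no longer exists):
     # /boot/config/go - only stop haveged if the rc script is still there
     if [ -x /etc/rc.d/rc.haveged ]; then
       /etc/rc.d/rc.haveged stop
     fi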
  16. I have similar GPU passthrough problems with one of my Windows 10 VMs after upgrading from 6.9.2 to 6.10.
     VM 1: not running anymore
     BIOS: SeaBIOS, Chipset: i440fx-4-1, vdisk1: SATA, vdisk2: VirtIO
     Graphics: Nvidia 1050TI passthrough, BIOS-ROM: yes, Sound: Nvidia 1050TI, Network: br0.5, virtio-net
     VM 2: still running fine
     BIOS: OVMF, Chipset: i440fx-4-1, vdisk1: VirtIO
     Graphics: Nvidia 1050TI passthrough, BIOS-ROM: yes, Sound: Nvidia 1050TI, Network: br0.5, virtio-net
     With VM 1 I tried almost everything. After adding VNC as primary and keeping the Nvidia as secondary card it boots up and I can connect with VNC. The Nvidia card is detected by Windows and I can install the Nvidia drivers, but I have no monitor output. Switching from GPU passthrough to VNC only works: the VM boots fine. Switching from SATA to VirtIO: the VM does not boot. I made a new VM (same settings as the original VM 1):
     1. using the original vdisk: VM does not boot.
     2. using a new vdisk, installed Windows: VM does not boot.
     3. using a new vdisk and changing the chipset (to i440fx-6-** and Q35-**), installed Windows: VM does not boot.
     4. using the original vdisk and changing the chipset (to i440fx-6-** and Q35-**): VM does not boot.
     What I realized: each time I have only the Nvidia card as primary card, without VNC, VM 1 won't even touch the vdisk file. The file modification date of the vdisk does not change, so it seems the VM does not find the disk! Nothing works when using SeaBIOS...? Is this the problem? I ended up installing a new VM with BIOS: OVMF, Chipset: i440fx-6-2, vdisk1: VirtIO, GPU: passthrough with ROM file, Network: br0.5, virtio-net, and it worked immediately. I will keep the old VM with VNC, because of the data and the installed software, until I've migrated everything to the new VM. I will perhaps also try to get GPU passthrough working again on the old VM if someone finds a method for doing this.
  17. Yes, just made the checks. Still working!
  18. It seems to be more of a problem with USB passthrough to VMs, though? My Home Assistant docker has no problems passing through the ConBee II stick, and I didn't have to change anything after the update from 6.9.2 to 6.10 either.
  19. hmm, my Unraid server has static IPs, and so do all of my dockers and VMs. Oh wow, I didn't know that Unraid 6.10 was released!! Fantastic! I just upgraded (flawlessly). Booted twice since then without any problems. I will test again later and report here.
  20. ok, I just reopened this bug report with some more information: ...and I found this bug report concerning Unraid 6.10-RC3: Does anybody know if it occurs in the latest 6.10 RC?
  21. Have you tried a newer RC? Does it solve this annoying issue? (I have the same issue in 6.9.2.... and just reopened a bug report.)
  22. Ok, I have better information now. I know what happens but still don't know the cause... I am on 6.9.2 and also randomly encounter the problem of losing the connection from the host to some docker containers, mostly after a reboot of Unraid. Sometimes this issue also comes out of the blue. I don't know exactly when it appears on my running Unraid server (out of the blue) because I may only realise it some days after it appeared... But I can imagine that it sometimes happens after an automatic backup of appdata with the plugin "CA appdata backup/restore V2", because this plugin stops and restarts the running docker containers. Last time it happened: yesterday, probably at 1:00 AM. My server rebooted out of the blue because of another problem (I'm investigating...). After that: no shim networks. Resolved today at ~08 AM (see attached log).
     My relevant configuration:
     Network: two NICs and four VLANs.
     Docker: "Allow access to host networks" checked/active.
     Dockers and VMs in those VLANs (br.01, br0.5, br0.6, br1.5, br1.16).
     A Home Assistant docker (host network) that loses connection to some other docker containers on different VLANs (e.g. ispyagentdvr on the custom br0.6 network, motioneye on the custom br0.5 network, frigate on the custom br1.15 network).
     What raises this issue:
     Reboot of Unraid: sometimes.
     Running Unraid: sometimes (because of the plugin "CA appdata backup/restore V2"??).
     This workaround solves the issue temporarily:
     Always: stop the docker service, de-/reactivate "Allow access to host networks", restart the docker service.
     Sometimes: a reboot of Unraid.
     I didn't try manually re-adding the shim networks, but in this post "shim-br0-networs-in-unraid" it seems to be a possible workaround:
     So the problem is the shim networks!? They sometimes aren't created at boot. (Why?) They sometimes get lost. (Why?)
     What are shim networks? Shim networks should be created when the Docker setting "Host access to custom networks" is enabled. This allows Unraid to talk directly to docker containers which are using macvlan (custom) networks. But those shim networks are not always created after a reboot! So it's still a NOT solved bug. What worries me is that this bug seems to persist in Unraid 6.10-rc3:
     Perhaps a user script could detect missing shim networks and re-add them (see the sketch below)? Any ideas or hints??
     Please see the pictures and the log I attached: before stopping the docker service, and after de-/reactivating "Allow access to host networks" followed by restarting the docker service. See the (commented) log file: syslog_2022-05-18_crash-at-01-AM-no-shim-networks-after-reboot_fix-at-08-AM.log
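     To answer my own question with something concrete: a rough sketch of what such a user script could look like. The interface naming pattern ("shim-" plus the parent interface) and the subnet are assumptions from my setup, and I have not tested this - toggling the Docker setting remains the known reliable workaround:
     #!/bin/bash
     # recreate a missing shim network (names and subnet are assumptions)
     PARENT=br0.5                 # parent interface of the custom docker network
     SHIM="shim-${PARENT}"        # the shim interface appears to be named after the parent
     SUBNET=192.168.5.0/24        # subnet of that custom docker network
     if ! ip link show "$SHIM" &>/dev/null; then
       echo "$SHIM missing, recreating" | logger -t shim-check
       ip link add "$SHIM" link "$PARENT" type macvlan mode bridge
       ip link set "$SHIM" up
       ip route add "$SUBNET" dev "$SHIM" 2>/dev/null
     fi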
  23. I'm also still randomly encountering this problem. This issue doesn't seem to be finally solved... I have "Allow access to host networks" checked/active. My Home Assistant docker (host network) sometimes loses connection to some other docker containers on different VLANs (e.g. ispyagentdvr on the custom br0.6 network, motioneye on the custom br0.5 network, frigate on the custom br1.15 network). Stopping and starting the docker service always solves this issue. A reboot of Unraid sometimes solves it, sometimes it raises it. I have two NICs and four VLANs.