Posts posted by vakilando

  1. Exactly,

    1. Search for Vaultwarden under Apps and install the container.
    2. Give the container its own IP address, i.e. don't select host or bridge as the network but br0, br1 etc., depending on what you have.
    3. Then block WWW access for that IP on your router or firewall.

    For the login to work, however, you need an HTTPS connection.

    I do this via a reverse proxy (previously swag, now npm), though my Vaultwarden is also reachable from the WWW.

    I assume this also works without a reverse proxy, but I have never tried it...

    2FA should work just the same with the integrated services.

    Emergency contacts work via a reachable mail server. These emergency contacts also need appropriate access to your environment, though (local or VPN).
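
    If your router or firewall is Linux/iptables-based, the blocking could look like this (a sketch; the container IP 192.168.1.50 and the WAN interface eth0 are assumptions):

    # Drop forwarded traffic between the Vaultwarden container and the WAN.
    # Adapt the IP address and the interface name to your setup.
    iptables -I FORWARD -s 192.168.1.50 -o eth0 -j DROP
    iptables -I FORWARD -d 192.168.1.50 -i eth0 -j DROP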

  2. @t34wrj mine still has the executable bit:

    -rwx------ 1 nobody users 32383675 Dec 21  2020 duplicacy_linux_x64_2.7.2
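
    If the bit got lost on your side, restoring it should be all that's needed (a sketch; the path assumes the saspus container layout shown in the listing further below):

    chmod u+x /mnt/user/appdata/duplicacy/config/bin/duplicacy_linux_x64_2.7.2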

     

    @xenoblade what do the file permissions look like in your postgres appdata folder?

    These are mine:

    root@FloBineDATA:/mnt/user/appdata/postgres-13.3# ls -la
    total 56
    drwx------ 1    999 users   530 May 23 04:11 .
    drwxrwxrwx 1 nobody users   974 May 26 15:39 ..
    -rw------- 1    999   999     3 Jul 31  2021 PG_VERSION
    drwxrwxr-x 1    999  1000     0 May  4 21:31 _BACKUPS_
    drwx------ 1    999   999    60 May  5 21:24 base
    drwx------ 1    999   999   606 May 25 05:00 global
    drwx------ 1    999   999     0 Jul 31  2021 pg_commit_ts
    drwx------ 1    999   999     0 Jul 31  2021 pg_dynshmem
    -rw------- 1    999   999  4782 Jul 31  2021 pg_hba.conf
    -rw------- 1    999   999  1636 Jul 31  2021 pg_ident.conf
    drwx------ 1    999   999    76 May 27 16:23 pg_logical
    drwx------ 1    999   999    28 Jul 31  2021 pg_multixact
    drwx------ 1    999   999     0 Jul 31  2021 pg_notify
    drwx------ 1    999   999     0 Jul 31  2021 pg_replslot
    drwx------ 1    999   999     0 Jul 31  2021 pg_serial
    drwx------ 1    999   999     0 Jul 31  2021 pg_snapshots
    drwx------ 1    999   999     0 May 23 04:11 pg_stat
    drwx------ 1    999   999   118 May 27 16:28 pg_stat_tmp
    drwx------ 1    999   999     8 May 10 23:04 pg_subtrans
    drwx------ 1    999   999     0 Jul 31  2021 pg_tblspc
    drwx------ 1    999   999     0 Jul 31  2021 pg_twophase
    drwx------ 1    999   999   268 May 27 02:13 pg_wal
    drwx------ 1    999   999     8 Jul 31  2021 pg_xact
    -rw------- 1    999   999    88 Jul 31  2021 postgresql.auto.conf
    -rw------- 1    999   999 28097 Jul 31  2021 postgresql.conf
    -rw------- 1    999   999    36 May 23 04:11 postmaster.opts
    -rw------- 1    999   999    94 May 23 04:11 postmaster.pid
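
    If yours differ, one way to fix it is to reset them to what the official postgres image expects (UID/GID 999 and mode 700 on the data directory). A sketch, assuming the same appdata path:

    # Hand the data directory back to the postgres user inside the container
    chown -R 999:999 /mnt/user/appdata/postgres-13.3
    chmod 700 /mnt/user/appdata/postgres-13.3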
    

     

  3. 57 minutes ago, t34wrj said:

    Duplicacy won't back up and generates a 'permission denied' error.

    @t34wrj which container are you using?

    I installed "saspus/duplicacy-web" (Docker Hub URL: https://hub.docker.com/r/saspus/duplicacy-web) and this one "survived" the upgrade to 6.10...

     

    I think that's because I have these variables set:

       User ID: 99 (Container Variable: USR_ID)
       Group ID: 100 (Container Variable: GRP_ID)

    (It's like using "--user 99:100" in the extra parameters.)
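
    For comparison, my template boils down to roughly this plain docker run (a sketch; the web UI port and the volume paths are assumptions based on the listing below):

    docker run -d --name duplicacy-web \
      -e USR_ID=99 -e GRP_ID=100 \
      -v /mnt/user/appdata/duplicacy/config:/config \
      -v /mnt/user/appdata/duplicacy/logs:/logs \
      -v /mnt/user/appdata/duplicacy/cache:/cache \
      -p 3875:3875 \
      saspus/duplicacy-web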

     

    These are the file/directory permissions (all OK):

    root@FloBineDATA:/mnt/user/appdata/duplicacy# ls -la
    drwxrwxrwx 1 root   root      46 Sep 13  2020 .
    drwxrwxrwx 1 nobody users    974 May 26 15:39 ..
    drwxr-xr-x 1 nobody users     18 Sep 13  2020 cache
    drwxrwxrwx 1 nobody users    170 May 27 12:06 config
    drwxrwxrwx 1 nobody users 378432 May 27 15:35 logs
    
    root@FloBineDATA:/mnt/user/appdata/duplicacy# cd config/
    
    root@FloBineDATA:/mnt/user/appdata/duplicacy/config# ls -la
    drwxrwxrwx 1 nobody users   170 May 27 12:06 .
    drwxrwxrwx 1 root   root     46 Sep 13  2020 ..
    drwx------ 1 nobody users   210 Dec 21  2020 bin
    -rw------- 1 nobody users 31103 May 27 12:06 duplicacy.json
    -rw------- 1 nobody users 12209 Jan 13  2020 duplicacy.json_bkp-1
    drwx------ 1 nobody users    18 Jan 12  2020 filters
    -rw------- 1 nobody users  2971 May 10 02:00 licenses.json
    -rw-r--r-- 1 nobody users    33 Jan  2  2020 machine-id
    -rw-r--r-- 1 nobody users   144 Jan  2  2020 settings.json
    drwx------ 1 nobody users    34 Sep 13  2020 stats
    

     

  4. On 2/19/2022 at 9:34 PM, TRusselo said:

    So why is this Docker container using over a gigabyte of memory?

    It's using more than anything else on my server...

    Same problem here...

     

    Because it slowed down my Unraid server and the unraid-api container kept crashing, I removed all files in appdata and reconfigured the Unraid-API, adding this to the extra parameters:

    -e JAVA_OPTS="-Xmx2500m"
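
    A container-level memory cap would be an additional safety net, so a runaway heap can't slow down the whole server (a sketch; the limit value is an assumption):

    --memory=3g --memory-swap=3g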

     

    I also used the hashed URL in the Unraid-API configuration, and everything seemed to work: I could see my Docker containers, my VMs and the Unraid details like the array and USB devices.

     

    But it still uses more memory than anything else on my server and eventually crashes, just as it did before I put "JAVA_OPTS" in the extra parameters (the screenshot below was taken before setting the Java options!).

    [Screenshot (2022-05-27) of the container's memory usage]

     

    These are the Docker container logs from the freshly configured unraid-api (create > configure > authenticate > working > crash):

    today at 11:25:24 > [email protected] start
    today at 11:25:24 > cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
    today at 11:25:24
    today at 11:25:25
    today at 11:25:25 WARN  mode option is deprecated. You can safely remove it from nuxt.config
    today at 11:25:25
    today at 11:25:25 (node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
    today at 11:25:25 Update this package.json to use a subpath pattern like "./*".
    today at 11:25:25 (Use `node --trace-deprecation ...` to show where the warning was created)
    today at 11:25:26 Connected to mqtt broker
    today at 11:25:26 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:26    at Object.openSync (node:fs:585:3)
    today at 11:25:26    at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:26    at updateMQTT (/app/mqtt/index.js:276:30)
    today at 11:25:26    at MqttClient.<anonymous> (/app/mqtt/index.js:49:7)
    today at 11:25:26    at MqttClient.emit (node:events:539:35)
    today at 11:25:26    at MqttClient.emit (node:domain:475:12)
    today at 11:25:26    at Readable.<anonymous> (/app/node_modules/mqtt/lib/client.js:1449:14)
    today at 11:25:26    at Readable.emit (node:events:527:28)
    today at 11:25:26    at Readable.emit (node:domain:475:12)
    today at 11:25:26    at endReadableNT (/app/node_modules/mqtt/node_modules/readable-stream/lib/_stream_readable.js:1010:12) {
    today at 11:25:26  errno: -2,
    today at 11:25:26  syscall: 'open',
    today at 11:25:26  code: 'ENOENT',
    today at 11:25:26  path: 'config/mqttKeys'
    today at 11:25:26 }
    today at 11:25:26 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
    today at 11:25:26
    today at 11:25:26 READY  Server listening on http://0.0.0.0:80
    today at 11:25:26
    today at 11:25:36 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:36    at Object.openSync (node:fs:585:3)
    today at 11:25:36    at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:36    at updateMQTT (/app/mqtt/index.js:276:30)
    today at 11:25:36    at Timeout._onTimeout (/app/mqtt/index.js:308:5)
    today at 11:25:36    at listOnTimeout (node:internal/timers:559:17)
    today at 11:25:36    at processTimers (node:internal/timers:502:7) {
    today at 11:25:36  errno: -2,
    today at 11:25:36  syscall: 'open',
    today at 11:25:36  code: 'ENOENT',
    today at 11:25:36  path: 'config/mqttKeys'
    today at 11:25:36 }
    today at 11:25:36 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
    today at 11:25:46 Failed to retrieve config file, creating new.
    today at 11:25:46
    today at 11:25:46 ERROR  ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:46
    today at 11:25:46  at Object.openSync (node:fs:585:3)
    today at 11:25:46  at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:46  at default (api/getServers.js:27:36)
    today at 11:25:46  at call (node_modules/connect/index.js:239:7)
    today at 11:25:46  at next (node_modules/connect/index.js:183:5)
    today at 11:25:46  at next (node_modules/connect/index.js:161:14)
    today at 11:25:46  at next (node_modules/connect/index.js:161:14)
    today at 11:25:46  at SendStream.error (node_modules/serve-static/index.js:121:7)
    today at 11:25:46  at SendStream.emit (node:events:527:28)
    today at 11:25:46  at SendStream.emit (node:domain:475:12)
    today at 11:25:46
    today at 11:25:46 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:46    at Object.openSync (node:fs:585:3)
    today at 11:25:46    at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:46    at updateMQTT (/app/mqtt/index.js:276:30)
    today at 11:25:46    at Timeout._onTimeout (/app/mqtt/index.js:308:5)
    today at 11:25:46    at listOnTimeout (node:internal/timers:559:17)
    today at 11:25:46    at processTimers (node:internal/timers:502:7) {
    today at 11:25:46  errno: -2,
    today at 11:25:46  syscall: 'open',
    today at 11:25:46  code: 'ENOENT',
    today at 11:25:46  path: 'config/mqttKeys'
    today at 11:25:46 }
    today at 11:25:46 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
    today at 11:25:56
    today at 11:25:56 ERROR  ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:56
    today at 11:25:56  at Object.openSync (node:fs:585:3)
    today at 11:25:56  at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:56  at default (api/getServers.js:27:36)
    today at 11:25:56  at call (node_modules/connect/index.js:239:7)
    today at 11:25:56  at next (node_modules/connect/index.js:183:5)
    today at 11:25:56  at next (node_modules/connect/index.js:161:14)
    today at 11:25:56  at next (node_modules/connect/index.js:161:14)
    today at 11:25:56  at SendStream.error (node_modules/serve-static/index.js:121:7)
    today at 11:25:56  at SendStream.emit (node:events:527:28)
    today at 11:25:56  at SendStream.emit (node:domain:475:12)
    today at 11:25:56
    today at 11:25:56 Error: ENOENT: no such file or directory, open 'config/mqttKeys'
    today at 11:25:56    at Object.openSync (node:fs:585:3)
    today at 11:25:56    at Proxy.readFileSync (node:fs:453:35)
    today at 11:25:56    at updateMQTT (/app/mqtt/index.js:276:30)
    today at 11:25:56    at Timeout._onTimeout (/app/mqtt/index.js:308:5)
    today at 11:25:56    at listOnTimeout (node:internal/timers:559:17)
    today at 11:25:56    at processTimers (node:internal/timers:502:7) {
    today at 11:25:56  errno: -2,
    today at 11:25:56  syscall: 'open',
    today at 11:25:56  code: 'ENOENT',
    today at 11:25:56  path: 'config/mqttKeys'
    today at 11:25:56 }
    today at 11:25:56 The secure keys for mqtt may have not been generated, you need to make 1 authenticated request via the API first for this to work
    today at 11:29:29 Connected to mqtt broker
    today at 11:29:30 Connected to mqtt broker
    today at 11:29:31 Connected to mqtt broker
    today at 11:29:33 Connected to mqtt broker
    today at 11:29:34 Connected to mqtt broker
    today at 11:29:35 Connected to mqtt broker
    today at 11:29:36 Connected to mqtt broker
    today at 11:29:37 Connected to mqtt broker
    today at 11:31:51 npm ERR! path /app
    today at 11:31:51 npm ERR! command failed
    today at 11:31:51 npm ERR! signal SIGTERM
    today at 11:31:51 npm ERR! command sh -c cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
    today at 11:31:51
    today at 11:31:51 npm ERR! A complete log of this run can be found in:
    today at 11:31:51 npm ERR!     /root/.npm/_logs/2022-05-27T09_25_24_461Z-debug-0.log
    today at 11:31:52
    today at 11:31:52 > [email protected] start
    today at 11:31:52 > cross-env NUXT_HOST=0.0.0.0 NODE_ENV=production node server/index.js
    today at 11:31:52
    today at 11:31:52
    today at 11:31:52 WARN  mode option is deprecated. You can safely remove it from nuxt.config
    today at 11:31:52
    today at 11:31:52 (node:26) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /app/node_modules/@nuxt/components/package.json.
    today at 11:31:52 Update this package.json to use a subpath pattern like "./*".
    today at 11:31:52 (Use `node --trace-deprecation ...` to show where the warning was created)
    today at 11:31:55 Connected to mqtt broker
    today at 11:31:55
    today at 11:31:55 READY  Server listening on http://0.0.0.0:80
    today at 11:31:55
    today at 11:31:58 Connected to mqtt broker
    today at 11:31:59 Connected to mqtt broker
    today at 11:32:02 Connected to mqtt broker
    today at 11:32:03 Connected to mqtt broker
    today at 11:32:05 Connected to mqtt broker
    today at 11:32:06 Connected to mqtt broker
    today at 11:35:20 Connected to mqtt broker
    today at 11:35:21 Connected to mqtt broker
    today at 11:35:24 Connected to mqtt broker
    today at 11:35:25 Connected to mqtt broker
    today at 11:38:39 Connected to mqtt broker
    today at 11:38:40 Connected to mqtt broker
    today at 11:38:42 Connected to mqtt broker
    today at 11:38:43 Connected to mqtt broker
    today at 11:38:45 Connected to mqtt broker
    today at 11:41:59 Connected to mqtt broker
    today at 11:42:01 Connected to mqtt broker
    today at 11:42:02 Connected to mqtt broker
    today at 11:45:16 Connected to mqtt broker
    today at 11:45:17 Connected to mqtt broker
    today at 11:45:19 Connected to mqtt broker
    today at 11:45:20 Connected to mqtt broker
    today at 11:48:35 Connected to mqtt broker
    today at 11:48:36 Connected to mqtt broker
    today at 11:48:38 Connected to mqtt broker
    today at 11:48:39 Connected to mqtt broker
    today at 11:48:41 Connected to mqtt broker
    today at 11:48:42 Connected to mqtt broker
    today at 11:50:42
    today at 11:50:42 <--- Last few GCs --->
    today at 11:50:42
    today at 11:50:42 [26:0x4aaa470]  1130031 ms: Mark-sweep 3995.4 (4140.7) -> 3992.9 (4138.4) MB, 86.6 / 0.1 ms  (average mu = 0.111, current mu = 0.032) allocation failure scavenge might not succeed
    today at 11:50:42 [26:0x4aaa470]  1130120 ms: Mark-sweep 3996.0 (4141.3) -> 3993.7 (4139.2) MB, 86.3 / 0.0 ms  (average mu = 0.074, current mu = 0.034) allocation failure scavenge might not succeed
    today at 11:50:42
    today at 11:50:42
    today at 11:50:42 <--- JS stacktrace --->
    today at 11:50:42
    today at 11:50:42 FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
    today at 11:50:42 1: 0xb09c10 node::Abort() [node]
    today at 11:50:42 2: 0xa1c193 node::FatalError(char const*, char const*) [node]
    today at 11:50:42 3: 0xcf8dbe v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
    today at 11:50:42 4: 0xcf9137 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
    today at 11:50:42 5: 0xeb09d5  [node]
    today at 11:50:42 6: 0xeb14b6  [node]
    today at 11:50:42 7: 0xebf9de  [node]
    today at 11:50:42 8: 0xec0420 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
    today at 11:50:42 9: 0xec339e v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
    today at 11:50:42 10: 0xe848da v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
    today at 11:50:42 11: 0x11fd626 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
    today at 11:50:42 12: 0x15f2099  [node]
    today at 11:50:42 Container stopped
  5. I have also encountered these problems after the upgrade to 6.10.

    I have appdata shared via NFS and SMB and cannot access or write to the vast majority of my Docker container data.

     

    See this thread here:

     

    Whether a Docker container works/starts properly after installation seems to depend on whether and how the developer sets the uid:gid used by that container.

    With some containers you can add "--user 99:100" to the extra parameters or create the appdata folder with permissions "777" before installing it.

    If you know which uid:gid a container uses, you can also set that on the folder instead, as with the grafana container, which wants "472:root".
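
    A sketch of both workarounds (the paths are examples):

    # Workaround 1: pre-create the appdata folder world-writable before installing
    mkdir -p /mnt/user/appdata/mycontainer
    chmod 777 /mnt/user/appdata/mycontainer

    # Workaround 2: hand the folder to the uid:gid the image expects,
    # e.g. grafana's 472:root
    chown -R 472:root /mnt/user/appdata/grafana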

    But honestly, this can't be the final solution...

     

    I don't think anyone has opened a bug report yet, or have you found one?

    OK, to some extent it is also a problem on the container developers' side, but I see the way Unraid sets the permissions as the bigger problem. Or am I getting this wrong?

     

    • As local backup storage I use a hard disk mounted via the "Unassigned Devices" plugin.
    • As remote backup storage I use Strato HiDrive storage.
    • As backup software I use the "Duplicacy" Docker container (saspus/duplicacy-web).

    Duplicacy backs up all data to the local backup storage.

    Afterwards, Duplicacy copies the data that is really important to me, encrypted, to my remote backup storage (HiDrive).

    I also recently got HiDrive S3 storage from Strato, to which I want to back up the less important data as well, once I find the time.

     

    Works well.

    Of course I have also successfully tested restores several times, both from the existing Duplicacy Docker on Unraid and from my notebook. If Unraid is dead, that has to work too ;-)

  6. I have the same problems with a couple of my docker containers.

    On 5/21/2022 at 8:02 PM, aeleos said:

    I was able to fix this by adding --user 99:100 to the extra parameters. You can also fix it by setting the grafana appdata folders to 472:root, which is the user/group the grafana container tries to use (and which causes these permission issues).

    Yes, but I'm not sure whether this is more than a temporary fix...?

    I'm also not sure whether "--user 99:100" in the extra parameters will work with every container...

     

    I tried to install another instance of grafana, and it doesn't even start with default settings.

     

    My grafana container (running):

    root@FloBineDATA:/mnt/user/appdata/grafana# ls -la
    total 13312
    drwxrwxrwx 1 nobody users       76 May 24 19:51 .
    drwxrwxrwx 1 nobody users      928 May 24 19:53 ..
    drwxrwxr-x 1    472 root         2 Dec  2 01:02 alerting
    drwxrwxr-x 1    472 root         0 Jun 14  2021 csv
    -rw-r--r-- 1    472   472 13631488 May 24 19:51 grafana.db
    drwxrwxr-x 1    472   472      276 Aug 22  2021 plugins
    drwxrwxr-x 1    472   472        0 Sep 18  2020 png
    drwxrwxr-x 1    472 root         0 Apr 10 23:34 storage

     

    The new test grafana container does not start:

    root@FloBineDATA:/mnt/user/appdata/grafana-1# ls -la
    total 0
    drwxr-xr-x 1 nobody users   0 May 24 19:53 .
    drwxrwxrwx 1 nobody users 928 May 24 19:53 ..

    The container log:

    GF_PATHS_DATA='/var/lib/grafana' is not writable.
    You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
    mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
    

     

    You must either add "--user 99:100" to the extra parameters OR create the grafana appdata folder owned by "472:root" before starting the container.

    This can't be the final solution...

     

    Has anybody opened a bug report, or is this a Docker container (maintainer) problem, i.e. must maintainers use "--user 99:100" to get their containers running on Unraid >= 6.10.0?

  7. 6 hours ago, Björn Frommholz said:

    I have had no access to the dashboard since the upgrade.

    Yes, I had that too.

    Under Settings > Management Access, on 6.9.2 I had the option Use TLS/SSL (USE_SSL) set to "auto".

    With this setting, accessing https://IP.von.un.raid was no longer possible after the upgrade to 6.10.0. I could, however, still access http://IP.von.un.raid, log in, and set USE_SSL=yes.

    After that it worked as before.

  8. 3 hours ago, mgutt said:

    I have now uninstalled powertop, since it no longer seems to be compatible

    @mgutt For me the command still works. Have you tried uninstalling it in the Nerd Tools and installing it again?

     

     

    On 5/20/2022 at 8:57 AM, heijoni said:

    My Windows VMs no longer run with passthrough either (graphics card/network).

    Could this error have something to do with that?

    @heijoni I had/have the same problem with one Windows VM (SeaBIOS with an Nvidia 1050 Ti passed through).

    The second one, with OVMF BIOS and the same Nvidia card passed through, keeps running as before...

     

  9. I know it's an old thread... Just for information:

    I tested the command on my upgraded Unraid (6.9.2 to 6.10.0) and commented out the line in the go file.

    /etc/rc.d/rc.haveged stop
    -bash: /etc/rc.d/rc.haveged: No such file or directory
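
    The line in /boot/config/go can then simply be commented out (a sketch, assuming the usual rc script call was what was in there):

    # haveged no longer ships with Unraid 6.10:
    #/etc/rc.d/rc.haveged start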

     

  10. I have similar GPU passthrough problems with one of my Windows 10 VMs after upgrading from 6.9.2 to 6.10.

     

    VM 1: not running anymore

    • BIOS: SeaBIOS
    • Chipset: i440fx-4-1
    • vdisk1: SATA
    • vdisk2: VirtIO
    • Graphics: Nvidia 1050TI passthrough
    • BIOS-ROM: yes
    • Sound: Nvidia 1050TI
    • Network: br0.5, virtio-net


    VM 2: still running fine

    • BIOS: OVMF
    • Chipset: i440fx-4-1
    • vdisk1: VirtIO
    • Graphics: Nvidia 1050TI passthrough
    • BIOS-ROM: yes
    • Sound: Nvidia 1050TI
    • Network: br0.5, virtio-net

    With VM 1 I tried almost everything.

    • After adding VNC as the primary and keeping the Nvidia as the secondary card, it boots up and I can connect via VNC.
      The Nvidia card is detected by Windows and I can install the Nvidia drivers, but I get no monitor output.
    • Switching from GPU passthrough to VNC only works: the VM boots fine.
    • Switching from SATA to VirtIO: the VM does not boot.
    • I made a new VM (same settings as the original VM 1):
      1. using the original vdisk: the VM does not boot.
      2. using a new vdisk, installed Windows: the VM does not boot.
      3. using a new vdisk and a different chipset (i440fx-6-** and Q35-**), installed Windows: the VM does not boot.
      4. using the original vdisk and a different chipset (i440fx-6-** and Q35-**): the VM does not boot.

    What I noticed:

    • Whenever the Nvidia card is the only (primary) card, without VNC, VM 1 won't even touch the vdisk file.
      The modification date of the vdisk file does not change, so it seems the VM does not find the disk!
    • Nothing works when using SeaBIOS...? Is that the problem?

     

    I ended up installing a new VM with BIOS: OVMF, chipset: i440fx-6-2, vdisk1: VirtIO, GPU: passthrough with ROM file, network: br0.5, virtio-net, and it worked immediately.

    I will keep the old VM with VNC, because of the data and the installed software, until I've migrated everything to the new VM.

    I may also try to get GPU passthrough working again on the old VM, if someone finds a way to do it.

     

  11. It seems to be more of a problem with USB passthrough to VMs, though?
    My Home Assistant Docker container has no problems passing the ConBee II stick through, and I didn't have to change anything after the update from 6.9.2 to 6.10.

  12. I'm also still randomly encountering this problem. It doesn't seem to be fully solved yet...

    I have "Allow access to host networks" checked/active.

    My Home Assistant Docker container (host network) sometimes loses the connection to some other Docker containers on different VLANs (e.g. ispyagentdvr on the custom br0.6 network, motioneye on the custom br0.5 network, frigate on the custom br1.15 network).

    Stopping and starting the Docker service always fixes it. A reboot of Unraid sometimes fixes it and sometimes triggers it. I have two NICs and four VLANs.
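
    Cycling the service from the shell, without a full reboot (a sketch, using Unraid's stock rc script):

    /etc/rc.d/rc.docker stop
    /etc/rc.d/rc.docker start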

  13. That's correct; unfortunately only "real" USB sticks work.

    My 4 GB Kingston DataTraveler has been plugged in for over 2 years now, without any problems.

    There is a video on YouTube (by Spaceinvader One?) in which USB sticks for unRAID are tested/recommended. Search for it.

  14. Hoi Vakilando
    sorry, what do you mean by UD? The other system is a TrueNAS server! I transferred the data to the unRaid server via an SMB share, but at that speed it takes forever [emoji20]
    UD = unassigned disk
    A hard disk that unRAID knows and can access, but that is not part of the array.
  15. Since it is an offsite backup of an already parity-protected array, I would put it on a UD disk of the other unRAID system (i.e. one not protected by a parity there).
    It's also faster that way...

  16. Do you have Midnight Commander (mc) installed on Unraid? I don't know off the top of my head whether it is part of the standard installation or whether I once installed it via the Nerd Tools...
    You could check whether there is an appdata directory under /mnt/user or under /mnt/disk1 (2, 3, ...) and whether there is anything in it, e.g. as sketched below.
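
    A quick check from the shell (a sketch):

    ls -la /mnt/user/appdata
    du -sh /mnt/disk*/appdata 2>/dev/null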

    Do you perhaps have the "Appdata Cleanup" plugin installed, which has since been discontinued (for security reasons)?
    A misconfiguration there might also cause something like this.

    Another option would be a misconfigured backup?

  17. The Docker data lives in the "appdata" share on the cache.

    It should not be touched by the mover and should stay on the cache.

     

    Questions:

    1. How did you configure the cache/mover setting for the appdata share?
      It should be set to "only" or "prefer".
    2. How did you specify the paths in the Docker container settings?
      You can use "/mnt/cache/appdata/bla" or "/mnt/user/appdata/bla"; in principle both always end up in the same place.

    If, however, you set the cache setting to e.g. "yes", the mover moves the data to the array.

    That is quite suboptimal, since the array is slower than the cache (especially for writes).

    If on top of that you specified the paths in the Docker container settings as "/mnt/cache/appdata", they will point to a (possibly almost) empty directory after a mover run: the mover has moved every file not in use to the array (/mnt/user/appdata/bla), while the Docker containers still look at /mnt/cache/appdata/bla.

     

    That would produce exactly the effect you describe; nothing else comes to mind right now... Check that.
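
    A quick way to see where the data actually lives and how the share is configured (a sketch):

    # Files still under /mnt/cache stayed on the cache; files present only on
    # the array disks were moved there by the mover.
    ls /mnt/cache/appdata
    ls /mnt/disk*/appdata 2>/dev/null
    # The share's mover setting ("only"/"prefer" keeps appdata on the cache):
    grep shareUseCache /boot/config/shares/appdata.cfg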
