Kazino43

Everything posted by Kazino43

  1. It was the "mover". It didn't finish gracefully; that's why it got stuck. Could someone, for the sake of peace of mind, post your result of running this first:
     ls /etc/passw*
     You should probably also have a backup file called "passwd-". Are there any differences when running:
     diff /etc/passwd{,-}
     In mine, all 'x' are substituted with a '!' instead. Is this normal? Apart from that one-character change, everything is the same.
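     A minimal sketch of the comparison being asked about, narrowed to the password field (the cut/paste usage is an illustration, not from the original post):

       # Side-by-side view of user name and password field (2nd field) in both files
       paste <(cut -d: -f1,2 /etc/passwd) <(cut -d: -f1,2 /etc/passwd-)

       # Or simply the plain diff from the post
       diff /etc/passwd /etc/passwd-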
  2. That would make sense, since I was watching the log and was connected. When I left, no additional nginx log entries were made. Never noticed that one. Next problem that appeared: I cannot stop the array. I am already in safe mode. I don't know why this all started today. I tried: and: Why is it now not even unmounting the array? I am not accessing anything. Could someone please just help me. Don't tell me I lost my whole Unraid system and don't know whether it was "hacked", plus possible data loss. What is going on today?? :(((
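     A hedged sketch of the kind of commands commonly used to see what is still holding files open when the array refuses to unmount (standard Linux tools, not taken from the original post):

       # List processes that still have files open on the user shares / array disks
       lsof /mnt/user 2>/dev/null
       fuser -mv /mnt/disk* /mnt/user 2>/dev/null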
  3. The Docker service and VM Manager are disabled, but I still get this one frequently (every 2-8 minutes):
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137972 open socket #4 left in connection 9
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137976 open socket #13 left in connection 10
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137978 open socket #14 left in connection 11
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137980 open socket #25 left in connection 12
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137982 open socket #26 left in connection 13
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137984 open socket #29 left in connection 14
     Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: aborting
     I never actively installed nginx myself, nor am I running an nginx reverse proxy manager. It keeps popping up even though the Docker services are disabled, as mentioned.
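     As a side note, the Unraid webGUI itself is served by an nginx process, so nginx log entries without any nginx container are not unusual. A small sketch for confirming which nginx the alerts come from (standard commands, not from the original post):

       # Show the running nginx processes and where their binaries live
       ps -ef | grep '[n]ginx'
       readlink /proc/$(pgrep -o nginx)/exe

       # Show which ports they are listening on
       netstat -tlnp | grep nginx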
  4. This is the log from the start of Unraid. After roughly 5 minutes some avahi entries about an open port appear, and then the Unraid server goes crazy, right after these two entries:
     Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: *2619 open socket #11 left in connection 7
     Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: *2621 open socket #17 left in connection 8
     Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: aborting
     There is no DNS service or nginx Docker container running on this server, so I don't get it.
  5. Are you sure @xjumper84? Any updates from @kris_wk? I have the following problem, and Google also points me to your thread. ->
  6. Hi, I have constant but small I/O on one drive of the array. Unfortunately 'iotop' isn't available in 6.12 because of the missing NerdPack, so I just made a screenshot with 'top'. As you can see, gzip and tar seem to be using the CPU heavily. Can I somehow see which application/folder they are accessing? I could imagine it's the Appdata Backup, because I had compression on, but the I/O and CPU load has now been constant for quite a while. Any help is really appreciated!
     Edit: In combination with this thread: I now got really worried: I get those 'nginx open socket' alerts in combination with avahi-daemon entries. I really don't know where to start with this one; it seems kind of weird and scary when you don't really understand what the log is saying.
     Edit 2: It gets even scarier. I wanted to stop the array, but it refuses to, see the log: What should the first step be now?
     Edit 3: //deleted
     Edit 4: As it turns out, some Docker container was just really misconfigured, and that is why these entries popped up. The only thing I am unsure about is why gzip and tar were being used. The only explanation would be the Appdata Backup plugin running with the compress option enabled; I will monitor with compression disabled.
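     For the question of which files gzip and tar are touching, a minimal sketch using standard Linux tools (the PID below is a placeholder, not from the original post):

       # Find the process IDs of the busy gzip/tar processes
       pgrep -a 'gzip|tar'

       # List the files a given process currently has open (replace 12345 with a real PID)
       lsof -p 12345
       ls -l /proc/12345/cwd /proc/12345/fd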
  7. I'm getting some warning notifications, even though the log seems kind of fine. ab.debug.log
  8. Hi, I wanted to run telegraf 1.20, because newer versions require a newer version of GLIBC. Based on the template from CA I added the following path mapping host:docker -> /bin/sh:/bin/sh for the command:
     /bin/sh -c 'apk update && apk upgrade && apk add ipmitool && apk add smartmontools && apk add lm_sensors && telegraf'
     It now builds the container, but at the end it stops with the error 'exec /entrypoint.sh: no such file or directory'. Does anybody have an idea?
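     One possible sanity check, purely as an assumption (the image tag and the explanation are not from the original post): bind-mounting the host's /bin/sh into an Alpine-based image can break the entrypoint, because a glibc-linked host binary generally cannot run against Alpine's musl libc, which is one way to end up with an 'exec ... no such file or directory' error. A sketch that runs the same package-install step without the /bin/sh mapping:

       # Run the Alpine variant of the telegraf 1.20 image with the container's own shell
       docker run --rm -it telegraf:1.20-alpine \
         /bin/sh -c 'apk update && apk add ipmitool smartmontools lm_sensors && telegraf --version'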
  9. Is it still worth it at just under €300 for the motherboard alone?
  10. Just wanted to confirm that separating those two paths did clear the error on a new backup run. Thanks. I guess for now we just have to remember that. The question is how many users really have this sort of path mapping. For me at least it was the original mapping from CA. It might be that many others simply change such a mapping directly instead of copying the template 1:1.
  11. Ah, sorry, didn't save the edit.
  12. The log seems fine until the end, after the backup of the flash drive: [warning] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.
  13. Sorry, I have to ask once again. I changed my Docker setup from .img to the folder structure and forgot that this container image was loaded manually and not via CA. Therefore I cannot restore it from CA, since the image is missing there. Even building the image again didn't help. I've been sitting here for almost 3 hours without being able to solve this. What am I doing wrong? This repo is supposed to be 'dockerized': https://github.com/0a1b/ebay-kleinanzeigen_enhanced
     My steps:
     Remove all previously attempted containers/images via the GUI (including removing images) and/or 'docker image rm XXXX'.
     Via the Docker Compose Manager: New Stack, with the following adapted compose file (added a local build path with the .env and compose.yaml deleted, changed the volume to appdata, added bridge network mode):
       version: '3'
       services:
         tg-bot:
           build: 'LOCAL/PATH/DOWNLOADED_GIT_REPO_WITHOUT_ENV_AND_COMPOSE_FILE'
           volumes:
             - /mnt/user/appdata/kleinanzeigen-crawler/sqlite:/app/jobs.sqlite
           ports:
             - "8444:8443"
           network_mode: bridge
     adapted from the original compose, which is the following:
       version: '3'
       services:
         tg-bot:
           build: .
           env_file:
             - .env
           volumes:
             - ./jobs.sqlite:/app/jobs.sqlite
           ports:
             - "8444:8443"
     and under the current stack in the Docker Compose Manager I copied the original .env with my credentials:
       # Token for the bot received from @BotFather
       TG_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXX
       # Domain name where the bot is hosted, needed for Telegram callbacks. No leading protocol, it should be accessible via
       # HTTPS
       # Example: domain.com
       HOST_URL=
       # Add non-empty value to enable debug
       # So far it affects only the mode of running bot, in Debug it's run in "polling" mode while in Production
       # it uses "webhook" mode. Thus, HOST_URL is not required for Debug.
       DEBUG=aa
     Then Update Stack, which exits fine, and 'compose up'. After that the following error appears in the Docker log:
       File "/app/main.py", line 16, in <module>
         'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')
       File "/usr/local/lib/python3.8/site-packages/apscheduler/jobstores/sqlalchemy.py", line 60, in __init__
         Column('id', Unicode(191, _warn_on_bytestring=False), primary_key=True),
       File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/sqltypes.py", line 325, in __init__
         super().__init__(length=length, **kwargs)
       TypeError: __init__() got an unexpected keyword argument '_warn_on_bytestring'
     I really don't know how I did it the last time around. Does anyone have a clue, or can you try it out yourselves? I am somehow not able to install this as a Docker container in any way. It seems like it doesn't install as a container but installs the dependencies into the Unraid OS itself, which is really bad, isn't it?
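     One possible direction, purely as an assumption based on the traceback (not verified against that repo): the unpinned SQLAlchemy pulled in at build time no longer accepts the _warn_on_bytestring argument that this APScheduler version passes, so pinning an older SQLAlchemy before building may help. A sketch, assuming the repo installs its Python dependencies from a requirements.txt:

       # Inside the local copy of the repo: pin SQLAlchemy below 1.4
       # (newer releases dropped the _warn_on_bytestring argument), then rebuild cleanly
       echo 'SQLAlchemy<1.4' >> requirements.txt
       docker compose build --no-cache
       docker compose up -d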
  14. I'm trying to understand the topic here, but I'm not quite getting it. So the idea is to restrict external access via reverse proxy to selected Docker containers only, correct? Can't that simply be done via NPM, by entering the desired subdomain of the respective container including its port and additionally securing it with an access list? Roughly as described here, for example: (That you probably shouldn't expose the Unraid GUI itself via reverse proxy should go without saying.) Is there anything about the approach in the video that should be considered critical from a security standpoint?
     Edit: The way I understand it, the goal here is to reach all Docker containers via the "simpler/nicer" custom domain (instead of the IP:PORT address), but still only on the local network, correct? Apart from the simpler address, does that have any other advantages?
  15. Today I had a weird one. I am on 6.12-rc2 after being on 6.9.x for almost a year. In the morning I realized that Pi-hole must be gone because there was no DNS resolution. I wanted to open the Unraid GUI to check, but it wasn't reachable. I looked at the router for a sudden change of IP address or something; nothing had changed there, but it said the Unraid server was off. Running downstairs to take a look: the PC was running. But I could neither get it to display the local terminal nor simply type 'reboot'/'poweroff' to shut it down safely. Nothing happened more than 10 minutes after the initial command input. I also wasn't able to WOL it or press the usual power button for a graceful shutdown. It simply didn't react to any input at all. So I did a hard reboot via the reset switch. It came back up nicely, but of course it wanted to do a parity check first. Since no PC had any connections to the array, I don't think I have to do it immediately. I thought Unraid would have created a diagnostics folder/entry, but it didn't.
     A) Is there anything I can do and post here to investigate the problem that already occurred, now that I have already rebooted?
     B) Mirroring the log to the flash drive is the only thing to do now to investigate any repetition of the same problem, right?
  16. What's the current state of the plugin? I am on 6.12-rc2 and installed the old one, 'v2'. But in the settings I cannot apply any changes I make; the 'Apply' button is unclickable/greyed out. I tried turning off Docker and then changing the settings, but that didn't help. Am I missing something? Appdata Backup/Restore v2.5 isn't available as a stable version in CA, right? I need to apply urgent changes so that the appdata folder gets backed up again.
  17. One more question. After everything is changed, the docker.img used before can be safely deleted, right?
  18. For the Docker directory: should it point to a share or to a disk, or is it irrelevant? I have the docker.img and appdata in certain shares on a specific pool. When I want to change from docker.img to a directory, should I use the path of the Docker share or of the disk the Docker directory will sit on?
     A) /mnt/disk_pool_name/docker_files
     B) /mnt/user/system/docker/docker_files
     Edit: The share is of course set to Use cache: Only (driveA).
  19. Hello, is there a working Docker container for Unraid that contains just a browser and passes the GPU through to it (to take load off the CPU, since a lot gets rendered by scrolling etc.)? The typical browser containers don't seem to support '/dev/dri'. I found the following project on Docker Hub: https://hub.docker.com/r/cremuzzi/firefox and used the following compose.yml:
       version: '2'
       services:
         firefox:
           image: cremuzzi/firefox
           container_name: firefox-with-gpu
           environment:
             - DISPLAY_WIDTH=1920
             - DISPLAY_HEIGHT=1080
             - SECURE_CONNECTION_VNC_METHOD=SSL
             - KEEP_APP_RUNNING=1
           volumes:
             - '/mnt/user/appdata/firefox-gpu/.Xauthority:/home/firefox/.Xauthority:ro'
           devices:
             - '/dev/dri:/dev/dri'
           ports:
             - '5800:5800'
           network_mode: bridge
     Unfortunately I can't get it to run; the following error shows up in the container's log: Error: no DISPLAY environment variable specified. That's why I'm running webtop with 'ubuntu-xfce', since that uses the GPU and the browser runs inside it. A single browser container would of course be more elegant. Do you know of any working browser containers with GPU support, or can you spot the error in the .yml? @mgutt's answer in another thread only explained falling back to the webtop mentioned above. I'd appreciate any help, I'm getting a bit desperate here.
  20. Hello everyone, after originally only wanting to try Unraid for a short while, the whole thing dragged on a bit. In the end I like the system: relatively easy to use, intuitive, and thanks to the GUI simply more user-friendly than other systems. My test system (originally planned for 1-2 months), consisting of random HDDs (some even attached via USB, mea culpa), has now been running for at least a year and the storage is filling up. It also can't go on like this, since the data is protected neither within the system by parity drives nor physically (just pull the USB plug). It was only meant to be a test system, after all.
     I don't actually need much storage. In addition to the data disks I had a 500GB cache SSD and a separate 250GB SSD each for VMs/Docker, to keep important containers like iobroker etc. on a separate SSD. Because of that I now wanted to use 3x4TB + 2x2TB SSDs as data disks and 1x4TB as the parity disk, plus one SSD each for VMs/Docker. A cache disk isn't really beneficial anymore in an SSD-only system (at least with 1Gb Ethernet); the array is already fast.
     Problem: I spent yesterday doing a lot of research and came to the conclusion that the SSD-only array won't be as simple as I thought. Unraid doesn't seem to support TRIM in a btrfs or xfs array. No TRIM => SSDs can't be used to their full potential => awkward situation. Alternatively I've read that many people use ZFS pools, since TRIM is supported there. That, in turn, isn't (yet) natively possible as the main array (when is the next update coming that unlocks this functionality?). I also don't want to rely on plugins and the like for the main array, since the system should be kept as simple as possible; if the main array already needs plugins, it can only become problematic for me. On the other hand, with a ZFS array I would have thrown away the actual reason I chose Unraid in the first place. Would it be possible to copy a future ZFS main array to an external HDD, then dissolve the ZFS array or add further SSDs, and afterwards copy the data back from the external HDD to the new ZFS array?
     Now my question: should I simply carry on with the low-cost approach (just attach the remaining HDDs) to gain more storage, and wait for an update that either supports TRIM for SSDs natively in the main array or unlocks ZFS (with corresponding autotrim) for the main array? Can anyone estimate how long LimeTech will still need to enable TRIM in btrfs/xfs, or to natively implement ZFS pools as the main array? There really is no other way to run an SSD-only setup properly with Unraid, is there? It's a pity that Unraid lags relatively far behind the times here; SSDs are fairly cheap by now. If you don't need to store exorbitant amounts of data, in my opinion there is nothing better than SSDs in the array plus one big backup HDD for external long-term backup, especially with today's electricity costs.
  21. Yup, 100% correct. It was either NerdPack or GPU-stats. I wasn't noticing the warnings because of the new UI in the right corner. Thanks!! Hope someone else can find this via Google if they run into the same issue.
  22. Hi, I've got a problem with my dashboard. Nearly every tab works fine, but the dashboard tab/page is empty after updating from 6.8.x to 6.12.0-rc2. I tried different USB ports for the boot drive, which didn't change anything; I don't think that's the cause, since every other tab and every other function of Unraid works fine. Does anyone have a hint for me? Right now the dashboard just looks like this:
  23. Thanks! Yeah, it's not a perfect solution. Perhaps I am going to try implementing an external database, but I am nowhere near having all the necessary skills for that.
  24. Thanks for your answer! I would want to use the Unraid Docker template. As far as I understand, all variables/paths in the .env file of the docker compose need to be copied to the Unraid Docker template, in this case: TG_TOKEN= HOST_URL= DEBUG= I did that and it works nicely. But there is one problem I am facing: since the container doesn't seem to store its data persistently, see main.py: # TODO: re-enable SQLite storage for persistency, the container shouldn't get turned off/paused in Unraid, otherwise it loses the "last chat_id" and stops working. As I had to find out, "CA Appdata Backup" stops all containers, therefore also this one, which breaks its functionality. Is there any way to exclude certain Docker containers from being stopped by "CA Appdata Backup"? Otherwise this will not work, at least in the current state of the GitHub repo.
  25. Never mind. Perhaps trying a bit more in terms of variables would help! Solved it. Is it correct that the .env file for docker compose must be "copied" 1:1 (paths, variables, etc.) into the Unraid template?