Kazino43

Members
  • Posts: 35
  • Joined
  • Last visited


Kazino43's Achievements

Noob (1/14)

Reputation: 2

  1. It was "mover". It didn't finish gracefully; that's why it was stuck. Could someone, for the sake of peace of mind, post the result of running this first: ls /etc/passw* You should possibly also have a backup file called "passwd-". Are there any differences when you run: diff /etc/passwd{,-} In mine, every 'x' is substituted with a '!' instead. Is this normal? Besides this one-character change, everything is the same.
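     A minimal sketch of the comparison being asked about, assuming both files exist; the awk line only prints the password field, where an 'x' normally means the real hash lives in /etc/shadow:

        # List the passwd file and its '-' backup copy (written by the
        # shadow-utils tools whenever the file is modified)
        ls -l /etc/passw*

        # Brace expansion: runs 'diff /etc/passwd /etc/passwd-'
        diff /etc/passwd{,-}

        # Print user name plus password field only; 'x' = hash kept in /etc/shadow
        awk -F: '{print $1, $2}' /etc/passwd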
  2. That would make sense, since I was watching the log and was connected. When I left, no additional nginx logs were made. Never noticed that one. The next problem that appeared: I cannot stop the array. I am already in safe mode. I don't know why this all started today. I tried: and: Why is it now not even unmounting the array? I am not accessing anything. Could someone please just help me? Don't tell me I lost my whole Unraid system; I don't know if it was "hacked" + eventual data loss. What is going on today?? :(
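     A hedged sketch of the usual way to hunt down what still holds the array busy, run from the local console; /mnt/user and /mnt/disk1 are the standard Unraid mount points, adjust as needed:

        # List processes that still have files open on the array
        lsof /mnt/user 2>/dev/null | head -n 20

        # Or check a single disk mount and show which PIDs block the unmount
        fuser -vm /mnt/disk1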
  3. Docker service and VM Manager are disabled, but I still get this one frequently (every 2-8 minutes):

        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137972 open socket #4 left in connection 9
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137976 open socket #13 left in connection 10
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137978 open socket #14 left in connection 11
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137980 open socket #25 left in connection 12
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137982 open socket #26 left in connection 13
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: *137984 open socket #29 left in connection 14
        Apr 22 18:01:26 Tower nginx: 2023/04/22 18:01:26 [alert] 29181#29181: aborting

     I never actively installed nginx myself, nor do I run an nginx reverse proxy manager. It keeps popping up even though Docker services are disabled, as mentioned.
  4. This is the log from the start of Unraid; after about 5 minutes some avahi entries about an open port appear, and then the Unraid server goes crazy, exactly after these two entries:

        Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: *2619 open socket #11 left in connection 7
        Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: *2621 open socket #17 left in connection 8
        Apr 22 11:32:05 Tower nginx: 2023/04/22 11:32:05 [alert] 19732#19732: aborting

     There is no DNS service or nginx Docker container running on this server, so I don't get it.
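     Worth noting: the Unraid web GUI is itself served by nginx, so these alerts can come from the built-in server rather than from any container. A quick sketch to confirm which nginx instance is logging; the PID 19732 is taken from the log above:

        # Show all running nginx processes and their parents
        ps -ef | grep '[n]ginx'

        # Map the PID from the log line to its executable and working directory
        readlink /proc/19732/exe
        ls -l /proc/19732/cwd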
  5. Are you sure @xjumper84? Any updates from @kris_wk? I've got the following problem, and Google also points me to your thread. ->
  6. Hi, I have constant but small I/O on one drive of the array. Unfortunately 'iotop' isn't available in 6.12 because of the missing Nerd Pack, so I just made a screenshot with 'top'. As you can see, it seems like gzip and tar are using the CPU heavily. Can I somehow see which application/folder they are accessing? I could think of the Appdata Backup, because I had compression on, but the I/O and CPU load has now been constant for a longer time. Any help is really appreciated!

     Edit: In combination with this thread: I now got really worried: I get those 'nginx open socket alerts' in combination with avahi-daemon entries. I really don't know how to start with this one; it seems kind of weird and scary when you don't really understand what the log is saying.

     Edit 2: It gets even scarier. I wanted to stop the array, but it is refusing to, see the log: What should the first step be now?

     Edit 3: //deleted

     Edit 4: As it turns out, some Docker container was just badly misconfigured, and that is why these entries popped up. The only thing I am unsure about is why gzip and tar were being used. The only explanation would be the Appdata Backup plugin running with the compress option enabled; I will monitor with the compress function disabled.
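     Without iotop, a rough sketch of how to see what the busy gzip/tar processes are touching, using only /proc; the PIDs come from pgrep, so nothing here is specific to one box:

        # For every running gzip or tar process, print its command line,
        # working directory, and the first few open file descriptors
        for pid in $(pgrep -x gzip; pgrep -x tar); do
            echo "== PID $pid =="
            tr '\0' ' ' < /proc/$pid/cmdline; echo
            readlink /proc/$pid/cwd
            ls -l /proc/$pid/fd | head
        done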
  7. I'm getting some warning notifications, even though the log seems kind of fine. ab.debug.log
  8. Hi, I wanted to run telegraf 1.20, because newer versions want a newer version of GLIBC. Based on the template from CA I added the following path mapping, host:docker -> /bin/sh:/bin/sh, for the command: /bin/sh -c 'apk update && apk upgrade && apk add ipmitool && apk add smartmontools && apk add lm_sensors && telegraf' It now builds the Docker container, but at the end it stops with the error 'exec /entrypoint.sh: no such file or directory'. Does anybody have an idea?
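     One plausible cause, stated as an assumption: bind-mounting the host's /bin/sh (a glibc-linked Slackware binary) over the Alpine image's musl shell breaks every script inside the image, including /entrypoint.sh. A sketch that drops the mount and overrides the entrypoint instead; the telegraf:1.20-alpine tag and the config path are assumptions:

        # Run the Alpine-based image without replacing its shell; install the
        # extra packages at start-up via an entrypoint override
        docker run -d --name telegraf \
          -v /mnt/user/appdata/telegraf:/etc/telegraf:ro \
          --entrypoint /bin/sh \
          telegraf:1.20-alpine \
          -c 'apk update && apk add ipmitool smartmontools lm_sensors && telegraf'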
  9. Is it even still worth it at close to €300 just for the MoBo alone?
  10. Just wanted to confirm that separating those two paths did clear the error on a new backup run. Thanks. I guess for now we just have to remember that. The question is how many users really have this sort of path mapping. For me at least it was the original way from CA. It might be that many others just change such a mapping directly instead of copying the template 1:1.
  11. Ah, sorry, didn't save the edit.
  12. The log seems fine until the end, after the backup of the flash drive: [warning] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.
  13. Sorry, I have to ask once again. I changed my Docker setup from .img to the folder structure and forgot about this container image being loaded manually and not via CA. Therefore I cannot restore it via CA, since the image is missing. Even building the image again didn't help. I've been sitting here for almost 3 hours without being able to solve this. What am I doing wrong? This repo shall be 'dockerized': https://github.com/0a1b/ebay-kleinanzeigen_enhanced

     My steps:

     1. Removed all self-tried containers/images via the GUI (including removing images) and/or 'docker image rm XXXX'.
     2. Via the Docker Compose Manager: New Stack.
     3. Included the following adapted compose file (added the local build path with deleted .env and compose.yaml, changed volumes to appdata, added bridge mode for the network):

        version: '3'
        services:
          tg-bot:
            build: 'LOCAL/PATH/DOWNLOADED_GIT_REPO_WITHOUT_ENV_AND_COMPOSE_FILE'
            volumes:
              - /mnt/user/appdata/kleinanzeigen-crawler/sqlite:/app/jobs.sqlite
            ports:
              - "8444:8443"
            network_mode: bridge

     adapted from the original compose, which is the following:

        version: '3'
        services:
          tg-bot:
            build: .
            env_file:
              - .env
            volumes:
              - ./jobs.sqlite:/app/jobs.sqlite
            ports:
              - "8444:8443"

     4. In the Docker Compose Manager, under the current stack, copied the original .env with my credentials:

        # Token for the bot received from @BotFather
        TG_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXX
        # Domain name where the bot is hosted, needed for Telegram callbacks. No leading protocol, it should be accessible via
        # HTTPS
        # Example: domain.com
        HOST_URL=
        # Add non-empty value to enable debug
        # So far it affects only the mode of running bot, in Debug it's run in "polling" mode while in Production
        # it uses "webhook" mode. Thus, HOST_URL is not required for Debug.
        DEBUG=aa

     5. Update Stack, which exits fine, and 'compose up'.

     After that the following error occurs in the Docker log:

        File "/app/main.py", line 16, in <module>
            'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')
        File "/usr/local/lib/python3.8/site-packages/apscheduler/jobstores/sqlalchemy.py", line 60, in __init__
            Column('id', Unicode(191, _warn_on_bytestring=False), primary_key=True),
        File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/sqltypes.py", line 325, in __init__
            super().__init__(length=length, **kwargs)
        TypeError: __init__() got an unexpected keyword argument '_warn_on_bytestring'

     I really don't know how I did it the last time around. Does anyone have a clue or can try it out themselves? I am somehow not able to install this Docker container in any way. It also seems like it doesn't install as a container but installs the dependencies into the Unraid OS itself, which would be really bad, wouldn't it?
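     The TypeError at the end points at a version clash rather than at the compose setup: older apscheduler releases pass the private '_warn_on_bytestring' keyword, which SQLAlchemy removed in the 1.4 line. A hedged sketch of one way out, assuming the repo has a requirements.txt that doesn't already pin SQLAlchemy (both the file and the pin are assumptions):

        # Pin SQLAlchemy to the last release line that still accepted the
        # keyword, then rebuild so pip picks the pin up inside the image
        cd LOCAL/PATH/DOWNLOADED_GIT_REPO_WITHOUT_ENV_AND_COMPOSE_FILE
        echo 'SQLAlchemy<1.4' >> requirements.txt
        docker compose build --no-cache tg-bot
        docker compose up -d tg-bot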
  14. I'm trying to understand the topic here, but it's not really working. So the goal here is to restrict external access via reverse proxy to selected Docker containers only, right? Can't that simply be done via NPM, by entering the desired subdomain of the container incl. the port there and additionally securing it via an Access List? So as described here, for example: (That you maybe shouldn't expose the Unraid GUI via reverse proxy should be obvious.) Is anything about the approach in the video to be considered critical security-wise?

     Edit: As I understand it, the idea here is to access all Docker containers via the "simpler/nicer" custom domain (instead of the IP:PORT address), but still only within the local network, correct? Apart from the simpler address, does that have any other advantages?
  15. Today I had a weird one. I am on 6.12-rc2 after being on 6.9.x for almost a year. In the morning I realized that Pi-hole must be gone because nothing was resolving DNS. I wanted to open the Unraid GUI to look it up; it wasn't available. I checked the router for a sudden change of IP address or something. Nothing had changed there; instead it said the Unraid server was off. Running downstairs to take a look: the PC was running. But I could neither make it display the local terminal nor just type 'reboot'/'poweroff' to shut down safely. Nothing happened more than 10 minutes after the initial command input. I also wasn't able to WOL or press the usual power button for a graceful shutdown. It simply didn't react to any inputs at all. So I did a hard reboot via the reset switch. It turned back on nicely, but of course wanted to do a parity check first. Since there were no connections to the array from any kind of PC, I don't think I have to do it immediately. I thought a diagnostics folder/entry would have been made by Unraid, but it wasn't. A) Is there anything I can do and post here to investigate the problem that already occurred, now that I have already rebooted? B) Mirroring the log to the flash drive is the only thing I can do now to catch any repetition of the same problem, right?
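     For A) and B), a short sketch of what can still be collected after the fact; 'diagnostics' is the Unraid CLI command behind Tools > Diagnostics, and the syslog mirror setting lives under Settings > Syslog Server:

        # Collect a diagnostics zip now; it still contains hardware/SMART state
        # and the current (post-reboot) logs, which is what the forum usually asks for
        diagnostics

        # The zip lands on the flash drive
        ls /boot/logs/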