arch1mede

Members · 21 posts
Everything posted by arch1mede

  1. I tried this; sadly, it did not work for me.
  2. Hi all, I actually went to the linuxserver.io channel and they said to come here; when I mentioned that I had been sent back there, they pointed me to the #other-support channel, where no one responded. Just wanted to provide a small update: I still do not have this working.

     Some kind soul took pity on me and, as a test, found that he too was having the same issue, BUT he was able to resolve his; I tried his fix and it did not fix mine. I tried the password reset method that someone else pointed to, with no effect. I was told the issue was that some of the dockers in the stack were restarting all of the time; right now the kasm.api docker is restarting once a minute.

     I tried to redeploy using the /mnt/user/appdata/kasmdata path, then was told that was a FUSE mount issue and to try /mnt/cache/appdata/kasmdata; this also did not work.
     I tried to deploy using /mnt/disk2/kasm, same behavior.
     I tried PUID 99 and PGID 100, same behavior.

     In all cases the kasm.api docker keeps restarting:

     Executing /usr/bin/kasm_server.so
     Received config /opt/kasm/current/conf/app/api.app.config.yaml
     Using parameters --enable-admin-api --enable-client-api --enable-public-api
     cheroot/__init__.py:13: UserWarning: Module cheroot was already imported from /tmp/_MEIjFKZpX/cheroot/__init__.pyc, but /tmp/_MEIjFKZpX/cheroot-8.6.0-py3.8.egg is being added to sys.path
     cheroot/__init__.py:13: UserWarning: Module more_itertools was already imported from /tmp/_MEIjFKZpX/more_itertools/__init__.pyc, but /tmp/_MEIjFKZpX/more_itertools-8.12.0-py3.8.egg is being added to sys.path
     cherrypy/__init__.py:112: UserWarning: Module cherrypy was already imported from /tmp/_MEIjFKZpX/cherrypy/__init__.pyc, but /tmp/_MEIjFKZpX/CherryPy-18.1.1-py3.8.egg is being added to sys.path
     cherrypy/__init__.py:112: UserWarning: Module portend was already imported from /tmp/_MEIjFKZpX/portend.pyc, but /tmp/_MEIjFKZpX/portend-2.6-py3.8.egg is being added to sys.path
     cherrypy/__init__.py:112: UserWarning: Module tempora was already imported from /tmp/_MEIjFKZpX/tempora/__init__.pyc, but /tmp/_MEIjFKZpX/tempora-5.0.1-py3.8.egg is being added to sys.path
     2022-10-28 13:00:48,054 [INFO] root: Performing Database Connectivity Test
     2022-10-28 13:00:48,578 [INFO] root: Added Log Handler
     2022-10-28 13:00:53,442 [INFO] admin_api_server: AdminApi initialized
     2022-10-28 13:00:53,500 [DEBUG] admin_api_server: Provider Manager Initialized
     2022-10-28 13:00:53,525 [DEBUG] client_api_server: Provider Manager Initialized
     Terminated

     In each case I get "login failed" and can never access Kasm. I cannot get any help from anyone, so I have lost hope. I don't have anything special on my Unraid; it's almost default except for some GUI add-ons. It is running 6.10.0, though, as I cannot get the latest version to run correctly, so I rolled it back to this version.
  3. I would post this in the container sub-section, but I am unable to find it. Has anyone installed this docker container? I have it installed but cannot log into it, as none of the credentials work. I have tried '[email protected]' and '[email protected]' with the password set during the install wizard, but I am getting an invalid login. Does anyone else have any suggestions?
  4. Anyways, have there been any other reports of Unraid locking up?
  5. Using this from the webterm; this system has no monitor but does have IPMI. I can see that the screen is blanked, which is why I rolled this back to a known working version, 6.10.0.
  6. setterm --blank 0
     setterm: terminal xterm-256color does not support --blank
  7. Ever since going to the latest version, Unraid has become unresponsive: IPMI shows nothing on the screen and I have to reboot to get anything to display. Anyone else have this same behavior? It seems that the screen is/was blanked; anyone know how to disable this? setterm does not support --blank.
  8. For me, md1 was mounting but showing up in ls as d??????????????, so it would have been good to know that there was an issue with the mount point.
  9. Yes, I realize that, but I'm not sure how I am supposed to know there is an issue if I have to check multiple places.
  10. I just wanted to report that this is still present in the latest 6.9.2:

      Jul 5 09:47:41 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:41 [alert] 8435#8435: worker process 18731 exited on signal 6
      Jul 5 09:47:43 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:43 [alert] 8435#8435: worker process 18756 exited on signal 6
      Jul 5 09:47:45 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:45 [alert] 8435#8435: worker process 18801 exited on signal 6
      Jul 5 09:47:47 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:47 [alert] 8435#8435: worker process 18828 exited on signal 6

      I was checking on my parity check and noticed my log filled with this. I only had two windows open, the main one and a system log. I ran the following:

      killall --quiet --older-than 1w process_name

      This seemed to have solved the issue (a sketch of the cleanup follows after this list).
  11. I actually figured out the issue: unknown to me, md1 was spitting out xfs errors even though the main page/dashboard showed everything green. As it happens, the lancache-bundle docker has a user setting that pointed to md1, which was really not accessible, so it wouldn't start. Rebooting the Unraid box resolved the issue, but I'm not really happy with that solution as I shouldn't have needed to reboot it. As a result of the xfs error on md1, I had to run a parity check on the whole array to resolve the issue; I may still have to put the array into maintenance mode and do a repair (a sketch of that follows after this list), but I thought I'd share how this was resolved.
  12. In my experience the VM solution is slower; besides, I have this configured to use its own IP, so there shouldn't be any port conflicts. This worked before the most recent update.
  13. Recently the docker updated but now refuses to run; all I see now is just an error. Anyone have any ideas how to get this running again?
  14. OK, so I finally found the solution for this: the consoleblank=0 kernel boot parameter (a sketch of where it goes follows after this list). After rebooting,
      cat /sys/module/kernel/parameters/consoleblank
      should now reflect 0.
  15. Anyone know how to disable the console screen blanking out? I have already tried setterm --blank 0; cat /sys/module/kernel/parameters/consoleblank is not 0, still saying 900. The instructions I found were for 6.8.3, so something must have changed for 6.9.0.
  16. I have no idea... and I'm not sure why others haven't run into this same issue. I went to the support GitHub and there was nothing in the issues, and I was starting to suspect that 6.9 is the cause. Maybe it's a combination of that docker and the version? If it happens again, I will need to downgrade to 6.8.3.
  17. I had this VERY same docker installed, and then all of a sudden my Unraid server started acting weird. The first time it locked up was a week ago; it just became unresponsive. The second time it started to degrade: the dashboard stopped displaying anything, the docker page stopped displaying anything, stopping/starting nginx did not resolve anything, and the web terminal started saying bad proxy, so I just removed that docker. I have been running dockers for YEARS and have never had a docker affect a server like this.
  18. Same issue here: installed the macinthebox docker, and while following the vid I pressed the notifier and it immediately told me to run the helper script. Pressed it and it said to run the VM, and the VM wasn't there. This is a brand new 6.9.0 install.
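
Regarding the nginx worker cleanup in post 10: a minimal sketch of how one might spot long-running stray processes before using the killall command from that post. The ps listing is illustrative, and "stale_process" is a placeholder of my own, not a name taken from the original post.

    # List processes with their elapsed run time to spot anything that has
    # been running far longer than expected (illustrative, not from the post).
    ps -eo pid,etime,comm

    # Kill only instances of a given process older than one week, mirroring
    # the command from post 10. "stale_process" is a placeholder name.
    killall --quiet --older-than 1w stale_process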
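Regarding the repair mentioned in post 11: a minimal sketch of the command-line xfs check, assuming the array has been started in Maintenance mode and that the affected device really is /dev/md1 as in that post (device naming differs on newer Unraid releases).

    # Dry run: report filesystem problems without changing anything.
    xfs_repair -n /dev/md1

    # If the dry run looks reasonable, run the actual repair.
    xfs_repair /dev/md1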
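Regarding the consoleblank fix in post 14: a minimal sketch of where the parameter could go, assuming a stock Unraid boot entry in /boot/syslinux/syslinux.cfg; the surrounding lines show a typical default entry and are not copied from the poster's system.

    label Unraid OS
      menu default
      kernel /bzimage
      append consoleblank=0 initrd=/bzroot

After a reboot, cat /sys/module/kernel/parameters/consoleblank should read 0, matching the check in post 14.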