enJOyIT

Everything posted by enJOyIT

  1. It was announced weeks in advance, so you knew what was going on. You had the choice and you obviously missed it because you did nothing. So it was your own fault for not acting.
  2. I'm sorry to hear that. But you shouldn't blame limetech for your unread emails. Maybe an exception can be made in your special case. You should try contacting @SpencerJ. Maybe he can help you out.
  3. You should have received an email informing you of the upcoming price change.
  4. If all your array drives were NVMe/SSD drives you would get far higher write speeds, because the slowdown comes from the HDD read heads: for every write, the old data and the old parity have to be read first so the new parity can be calculated (new parity = old parity XOR old data XOR new data), and that keeps the heads seeking all over the platters. But an all-SSD array is strongly discouraged because of missing TRIM support. So the answer is: it's a physical limitation of HDDs which limetech can never fix.
  5. Sorry, you are talking b******. Buy your licence now and you will receive updates forever and for free, exactly as currently advertised. The news has been out for a week now and you can still buy the old licence at the old price. What's the point of waiting? For what? To complain afterwards that the lifetime licence has become more expensive?!
  6. You're absolutely right 🙂 Copy/Paste lost it 😁
  7. Hi, try these extra parameters: --hostname='SteamHeadless' --add-host='SteamHeadless:127.0.0.1' --restart='unless-stopped' --shm-size='2G' --ipc='host' -v '/tmp/.X11-unix/':'/tmp/.X11-unix/':'rw' -v '/tmp/tmp/pulse/':'/tmp/tmp/pulse/':'rw' --ulimit='nofile=1024:524288' --device='/dev/fuse' --device='/dev/uinput' --device-cgroup-rule='c 13:* rmw' --cap-add='NET_ADMIN' --cap-add='SYS_ADMIN' --cap-add='SYS_NICE' It's running fine for me with those.
  8. Hi, I started encrypting my drives and wanted to back up the LUKS headers with your script. Can I do this? Because in step 2 there are a lot of "DO NOT USE" notes 😄 Are there any problems running the script? Has anyone done a restore that actually worked? Thanks!
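For comparison, a manual backup with plain cryptsetup would look roughly like this (device name and backup path are just placeholders, adjust to your setup):
cryptsetup luksHeaderBackup /dev/sdX1 --header-backup-file /mnt/user/backup/sdX1-luks-header.img
cryptsetup luksHeaderRestore /dev/sdX1 --header-backup-file /mnt/user/backup/sdX1-luks-header.img
The restore variant overwrites the on-disk header, so only ever run it against the drive the backup was taken from.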
  9. One short question... what about the access.log files? Are they deleted after a while? I'm afraid they will grow endlessly over time.
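If they are not rotated automatically, I guess a small logrotate rule could cap them; a rough sketch, the path is hypothetical:
/mnt/user/appdata/<container>/log/access.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}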
  10. Reply to myself 🙂 I think I finally found the root cause... I had too much memory allocated to my proxmox VMs in total... A bit confusing, since this setup ran for months without any issues... But I dug a bit deeper into proxmox and came across this error:
root@pve:~# cat /var/log/syslog | grep oom
2023-12-04T21:45:11.083522+01:00 pve kernel: [3490554.611772] CPU 2/KVM invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
2023-12-04T21:45:11.193598+01:00 pve kernel: [3490554.611817] oom_kill_process+0x10d/0x1c0
2023-12-04T21:45:11.195148+01:00 pve kernel: [3490554.612109] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
2023-12-04T21:45:11.195619+01:00 pve kernel: [3490554.612361] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=qemu.slice,mems_allowed=0,global_oom,task_memcg=/qemu.slice/215.scope
So no unraid issue! 🙂
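In case anyone else hits this: the memory assigned to the VMs can be summed up quickly on the Proxmox host (assuming the default config location, and that each config has an explicit memory line):
root@pve:~# grep -H '^memory' /etc/pve/qemu-server/*.conf
That prints the memory setting (in MiB) of every VM config, so you can compare the total against the host RAM.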
  11. Yeah, I know it's not officially supported. I think the issue has something to do with zfs! Because my 6.12.5 is still running (for an hour and a half now). So maybe bare metal is affected too! Testing bare metal is currently not possible... Maybe unraid will support VM snapshots some day... then I'm going to move to bare metal.
  12. Hey, I've been running unraid for months without issues. I updated to 6.12.6 right after it was released (coming from 6.12.5). I don't know if it is related to the update, but I didn't change anything else except unraid. It's a really weird issue, because unraid just turns off without any logging. For example, I started a docker update and in the middle of the download unraid just stopped: the VM is off and proxmox doesn't show anything... But it's not tied to docker updates, because if I run the server and simply do nothing (apart from the docker apps running) it turns off, too. This happens within 5 to 15 minutes. Is it possible that there is an issue regarding zfs, since my appdata/docker filesystem is zfs? I've rolled back to 6.12.5 and will check whether the same behaviour is still present. Edit: 6.12.5 has now been running for 30 minutes without issues. Keep tracking... unraid-diagnostics-20231205-0909.zip
  13. I haven't been able to get the server working for weeks now. It worked before, but suddenly it stopped... I can't get into the WebGui. It has something to do with xorg:
2023-11-05 11:28:20,213 INFO spawned: 'x11vnc' with pid 1627
2023-11-05 11:28:21,215 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:21,215 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:21,218 WARN exited: desktop (exit status 11; not expected)
2023-11-05 11:28:22,220 INFO spawned: 'desktop' with pid 1650
2023-11-05 11:28:23,230 INFO success: desktop entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:23,230 INFO reaped unknown pid 1656 (exit status 0)
2023-11-05 11:28:23,684 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:24,687 INFO spawned: 'xorg' with pid 1690
2023-11-05 11:28:25,689 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:34,714 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:35,716 INFO spawned: 'xorg' with pid 1835
2023-11-05 11:28:36,718 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:45,741 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:46,744 INFO spawned: 'xorg' with pid 1980
2023-11-05 11:28:47,746 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:50,492 WARN exited: sunshine (exit status 11; not expected)
2023-11-05 11:28:50,492 WARN exited: x11vnc (exit status 11; not expected)
2023-11-05 11:28:50,494 INFO spawned: 'x11vnc' with pid 2030
2023-11-05 11:28:50,496 INFO spawned: 'sunshine' with pid 2032
2023-11-05 11:28:51,497 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:51,497 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
My docker run:
docker run -d --name='steam-headless' --net='eth1' --ip='192.168.20.115' --privileged=true -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="unraid" -e HOST_CONTAINERNAME="steam-headless" -e 'USER_PASSWORD'='xxxx' -e 'TZ'='Europe/Berlin' -e 'USER_LOCALES'='de_DE.UTF-8 UTF-8' -e 'WEB_UI_MODE'='vnc' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-9155e6ad-cdc3-137e-786a-3f45292a5ceb' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'DISPLAY'=':55' -e 'MODE'='primary' -e 'PORT_NOVNC_WEB'='8083' -e 'ENABLE_VNC_AUDIO'='false' -e 'ENABLE_EVDEV_INPUTS'='false' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8083]/' -l net.unraid.docker.icon='https://raw.githubusercontent.com/Josh5/docker-steam-headless/master/images/steam-icon.png' -v '/mnt/docker/appdata/steam-headless':'/home/default':'rw' -v '/mnt/cache/games/':'/mnt/games':'rw' --hostname='SteamHeadless' --add-host='SteamHeadless:127.0.0.1' --restart=unless-stopped --shm-size=2G --ipc="host" -v '/tmp/.X11-unix':'/tmp/.X11-unix':'rw' -v '/tmp/tmp/pulse':'/tmp/tmp/pulse':'rw' -v '/dev/input':'/dev/input':'ro' --ulimit nofile=1024:524288 --runtime=nvidia 'josh5/steam-headless:latest'
03b1acf60cdca9bc0620d407d1f298b90165138459f1a706ae63e2a230194add
I tried with/without a dummy plug and Display IDs 0, 1, 55... same behaviour. On my unraid host:
root@unraid:/tmp/.X11-unix# ls
run/
Is there something missing here? I don't know what else I can do. Maybe somebody has an idea?
Thanks! P.S. The GPU is an RTX A2000.
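One more thing I still want to try (not sure it's relevant): checking whether the GPU is visible inside the container at all, e.g.
docker exec -it steam-headless nvidia-smi
If that fails, xorg probably has no chance anyway.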
  14. Hi, I'm using unraid in exactly the way you are planning in point 1, so I can say that it is absolutely reliable to run unraid in this configuration. In the long run I want to switch over to unraid completely, because I want to get rid of this extra layer. But at the moment VM snapshotting in proxmox is far better, so I'll have to wait 🙂
  15. Today's update broke my container:
2023-09-29 12:26:48,845 WARN exited: sunshine (exit status 11; not expected)
2023-09-29 12:26:48,846 INFO spawned: 'sunshine' with pid 1942
2023-09-29 12:26:49,855 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:26:49,856 INFO success: desktop entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:26:49,856 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:26:49,856 INFO reaped unknown pid 1954 (exit status 0)
2023-09-29 12:26:57,080 WARN exited: xorg (exit status 11; not expected)
2023-09-29 12:26:58,082 INFO spawned: 'xorg' with pid 2078
2023-09-29 12:26:59,083 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:08,104 WARN exited: xorg (exit status 11; not expected)
2023-09-29 12:27:08,106 INFO spawned: 'xorg' with pid 2211
2023-09-29 12:27:09,107 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:18,129 WARN exited: xorg (exit status 11; not expected)
2023-09-29 12:27:19,110 INFO spawned: 'xorg' with pid 2350
2023-09-29 12:27:19,111 WARN exited: x11vnc (exit status 11; not expected)
2023-09-29 12:27:19,111 WARN exited: desktop (exit status 11; not expected)
2023-09-29 12:27:19,111 WARN exited: sunshine (exit status 11; not expected)
2023-09-29 12:27:19,112 INFO spawned: 'x11vnc' with pid 2351
2023-09-29 12:27:19,114 INFO spawned: 'desktop' with pid 2352
2023-09-29 12:27:19,116 INFO spawned: 'sunshine' with pid 2353
2023-09-29 12:27:20,125 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:20,126 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:20,126 INFO success: desktop entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:20,126 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-29 12:27:20,126 INFO reaped unknown pid 2368 (exit status 0)
The GUI is not available... What's wrong here?
  16. Strange issue here... I click on "Play", it checks the Cloud Status and then immediately stops, and the "Play" button reappears... The game obviously won't start. I am using an Nvidia A2000. Are there any log files I can check? The docker logs don't show anything useful. Any thoughts? Edit: Reinstalling the game fixed it...
  17. Are VM Snapshots on that list too? 🙃
  18. Well, you are right! Maybe it would be a good idea to put this information into the "question mark" text.
  19. Hi, I can't enable the exclusive share option. I checked every drive for a folder called games, but there isn't one, so that can't be the problem. What am I missing? Thanks!
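(For anyone who wants to check from the shell: something like ls -d /mnt/*/games 2>/dev/null should list every copy of the folder across all disks and pools -- the exact mount points depend on your setup.)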
  20. Hello, I have now read somewhere that the zfs filesystem in combination with docker may cause problems when the docker images are stored in a directory rather than in a single image file. Apparently because zfs somehow writes out every change!? Is there a recommendation here? I chose the directory option for docker and am now unsure whether that is a good idea. Thanks!
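(If you want to see what docker has created there: zfs list -r <pool>/docker -- pool and dataset names are just placeholders -- should show whether docker has made a separate dataset per image layer.)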
  21. Yes, that's true... plex is running again after a restart. Before that it had hung completely... And why does no port mapping show up here anymore? I think the 32400 port used to be visible there, along with plex's IP. Could somebody check whether it is also empty for them? Thanks!
  22. I got the same error out of nowhere... Where did it suddenly come from? It ran for months without problems and I didn't update the container. The error just appeared from one moment to the next. Somehow unraid 6.12.2 is nothing but stress for me...
  23. But did you check whether your network share is connected correctly, with "\\" at the beginning of the folder path (i.e. a UNC path like \\server\share)?