enJOyIT

Members
  • Posts: 90
  • Joined
  • Last visited


enJOyIT's Achievements

Apprentice (3/14)

Reputation: 16
Community Answers: 1

  1. It was announced weeks in advance, so you knew what was going on. You had the choice and you obviously missed it because you did nothing. So it was your own fault for not acting.
  2. I'm sorry to hear that. But you shouldn't blame limetech for your unread e-mails. Maybe an exception can be made in your special case. You should try contacting @SpencerJ. Maybe he can help you out.
  3. You should have received an email informing you of the upcoming price change.
  4. If all your array drives were nvme/ssd drives, you would get much higher write speeds, because the slowdown comes from the HDD read heads, which have to seek all over the platters to fetch the bits needed to calculate the new parity. But an all-SSD array is strongly discouraged because of missing trim support. So the answer is: it's a physical limitation that limetech can never fix for HDD drives.
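For context, the reason the heads have to seek at all is the standard read-modify-write parity update (this is the generic single-parity identity, not anything Unraid-specific):

new_parity = old_parity XOR old_data XOR new_data

Every array write therefore first has to read the old data block and the old parity block before it can write either one, and on spinning disks each of those reads costs a head seek.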
  5. Sorry, but you are talking b****** Buy your licence now and you will receive updates forever and for free, as currently advertised. The news has been out for a week now and you can still buy the old licence at the old price. What's the point of waiting? For what? To complain afterwards that the lifetime licence is now more expensive?!
  6. You're absolutely right 🙂 Copy/Paste lost it 😁
  7. Hi, try these extra parameters:

--hostname='SteamHeadless'
--add-host='SteamHeadless:127.0.0.1'
--restart='unless-stopped'
--shm-size='2G'
--ipc='host'
-v '/tmp/.X11-unix/':'/tmp/.X11-unix/':'rw'
-v '/tmp/tmp/pulse/':'/tmp/tmp/pulse/':'rw'
--ulimit='nofile=1024:524288'
--device='/dev/fuse'
--device='/dev/uinput'
--device-cgroup-rule='c 13:* rmw'
--cap-add='NET_ADMIN'
--cap-add='SYS_ADMIN'
--cap-add='SYS_NICE'

For me it's running fine with that. A full usage sketch follows below.
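As a sketch of how the flags slot into a complete command (the image name josh5/steam-headless:latest is taken from the docker run in post 13 below; the container name is just an example):

docker run -d --name='steam-headless' \
  --hostname='SteamHeadless' --add-host='SteamHeadless:127.0.0.1' \
  --restart='unless-stopped' --shm-size='2G' --ipc='host' \
  -v '/tmp/.X11-unix/':'/tmp/.X11-unix/':'rw' \
  -v '/tmp/tmp/pulse/':'/tmp/tmp/pulse/':'rw' \
  --ulimit='nofile=1024:524288' \
  --device='/dev/fuse' --device='/dev/uinput' \
  --device-cgroup-rule='c 13:* rmw' \
  --cap-add='NET_ADMIN' --cap-add='SYS_ADMIN' --cap-add='SYS_NICE' \
  'josh5/steam-headless:latest'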
  8. Hi, I started to encrypt my drives and wanted to back up the luks header with your script. Can I do this? Because in step 2 there are a lot of "DO NOT USE" notes 😄 Are there any problems running the script? Did anyone do a restore that worked? Thanks!
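For anyone comparing against the script, these are the stock cryptsetup commands such a header backup normally wraps (the device and file paths here are only placeholders, not values from the script):

cryptsetup luksHeaderBackup /dev/sdX1 --header-backup-file /path/to/luks-header-sdX1.img
cryptsetup luksHeaderRestore /dev/sdX1 --header-backup-file /path/to/luks-header-sdX1.img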
  9. One short question... what about the access.log files? Are they deleted after a while? I'm afraid they will blow up over time.
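If they are not rotated automatically, a minimal logrotate stanza would keep them bounded (the path is an assumption; point it at wherever the container writes its access.log):

/path/to/appdata/access.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}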
  10. Reply to myself 🙂 I think I finally found the root cause... I had allocated too much memory to my proxmox VMs in total... A bit confusing, since this setup ran for months without any issues... But I dug a bit deeper into proxmox and found this error:

root@pve:~# cat /var/log/syslog | grep oom
2023-12-04T21:45:11.083522+01:00 pve kernel: [3490554.611772] CPU 2/KVM invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
2023-12-04T21:45:11.193598+01:00 pve kernel: [3490554.611817] oom_kill_process+0x10d/0x1c0
2023-12-04T21:45:11.195148+01:00 pve kernel: [3490554.612109] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
2023-12-04T21:45:11.195619+01:00 pve kernel: [3490554.612361] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=qemu.slice,mems_allowed=0,global_oom,task_memcg=/qemu.slice/215.scope

So no unraid issue! 🙂
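To sanity-check that kind of overcommit on a Proxmox host, two standard commands are enough: qm list prints each VM's configured memory, and free -h shows the host's total RAM, so the sum of the first should stay comfortably below the second:

qm list
free -h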
  11. Yeah, I know it's not officially supported. I think the issue has something to do with zfs! Because my 6.12.5 is still running (for 1.5 hours now). So maybe bare metal is affected, too! Testing bare metal is currently not possible... Maybe unraid will support VM snapshots one day... then I'll move to bare metal.
  12. Hey, I've been running unraid for months without issues. I updated to 6.12.6 right after it was released (coming from 6.12.5). I don't know if it is related to the update, but I didn't change anything else except unraid. It's a really weird issue, because unraid just turns off without any logging. For example, I started a docker update, and in the middle of downloading, unraid just stopped: the VM was off and proxmox didn't show anything... But it's not tied to a docker update, because if I run the server and just do nothing (apart from the docker apps that are running), it turns off, too. This happens within 5 to 15 minutes. Is it possible that there is an issue with zfs, since my appdata/docker filesystem is zfs? I've rolled back to 6.12.5 and will check if the same behaviour is still present.

Edit: 6.12.5 has now been running for 30 minutes without issues. Keep tracking...

unraid-diagnostics-20231205-0909.zip
  13. I haven't been able to get the server working for weeks now. It worked before, but suddenly it stopped... I can't get into the WebGui. It has something to do with xorg:

2023-11-05 11:28:20,213 INFO spawned: 'x11vnc' with pid 1627
2023-11-05 11:28:21,215 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:21,215 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:21,218 WARN exited: desktop (exit status 11; not expected)
2023-11-05 11:28:22,220 INFO spawned: 'desktop' with pid 1650
2023-11-05 11:28:23,230 INFO success: desktop entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:23,230 INFO reaped unknown pid 1656 (exit status 0)
2023-11-05 11:28:23,684 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:24,687 INFO spawned: 'xorg' with pid 1690
2023-11-05 11:28:25,689 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:34,714 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:35,716 INFO spawned: 'xorg' with pid 1835
2023-11-05 11:28:36,718 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:45,741 WARN exited: xorg (exit status 11; not expected)
2023-11-05 11:28:46,744 INFO spawned: 'xorg' with pid 1980
2023-11-05 11:28:47,746 INFO success: xorg entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:50,492 WARN exited: sunshine (exit status 11; not expected)
2023-11-05 11:28:50,492 WARN exited: x11vnc (exit status 11; not expected)
2023-11-05 11:28:50,494 INFO spawned: 'x11vnc' with pid 2030
2023-11-05 11:28:50,496 INFO spawned: 'sunshine' with pid 2032
2023-11-05 11:28:51,497 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-05 11:28:51,497 INFO success: sunshine entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

My docker run:

docker run -d --name='steam-headless' --net='eth1' --ip='192.168.20.115' --privileged=true \
  -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="unraid" -e HOST_CONTAINERNAME="steam-headless" \
  -e 'USER_PASSWORD'='xxxx' -e 'TZ'='Europe/Berlin' -e 'USER_LOCALES'='de_DE.UTF-8 UTF-8' \
  -e 'WEB_UI_MODE'='vnc' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-9155e6ad-cdc3-137e-786a-3f45292a5ceb' \
  -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'DISPLAY'=':55' -e 'MODE'='primary' \
  -e 'PORT_NOVNC_WEB'='8083' -e 'ENABLE_VNC_AUDIO'='false' -e 'ENABLE_EVDEV_INPUTS'='false' \
  -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8083]/' \
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/Josh5/docker-steam-headless/master/images/steam-icon.png' \
  -v '/mnt/docker/appdata/steam-headless':'/home/default':'rw' -v '/mnt/cache/games/':'/mnt/games':'rw' \
  --hostname='SteamHeadless' --add-host='SteamHeadless:127.0.0.1' --restart=unless-stopped \
  --shm-size=2G --ipc="host" -v '/tmp/.X11-unix':'/tmp/.X11-unix':'rw' -v '/tmp/tmp/pulse':'/tmp/tmp/pulse':'rw' \
  -v '/dev/input':'/dev/input':'ro' --ulimit nofile=1024:524288 --runtime=nvidia 'josh5/steam-headless:latest'

03b1acf60cdca9bc0620d407d1f298b90165138459f1a706ae63e2a230194add

I tried with and without a dummy plug, and with Display IDs 0, 1, 55... same behaviour.

On my unraid host:

root@unraid:/tmp/.X11-unix# ls
run/

Is there something missing here? I don't know what else I can do. Maybe somebody has an idea?
Thanks!

P.S. The GPU is an RTX A2000