ich777

Community Developer
Everything posted by ich777

  1. You can always troubleshoot by starting it from the command line with the command: lxc-start -F CONTAINERNAME. This runs the container in the foreground and should tell you what failed.
  2. List your USB devices with "lsusb":

     Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
     Bus 001 Device 005: ID 8564:1000 Transcend Information, Inc. JetFlash
     Bus 001 Device 003: ID 05e3:0610 Genesys Logic, Inc. Hub
     Bus 001 Device 002: ID 1a86:7523 QinHeng Electronics CH340 serial converter
     Bus 001 Device 007: ID 0781:5567 SanDisk Corp. Cruzer Blade
     Bus 001 Device 006: ID 05e3:0610 Genesys Logic, Inc. Hub
     Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller
     Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

     Let's say I want to add "Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller" to the container. Do a "ls -l /dev/bus/usb/001/004":

     crw-rw-r-- 1 root root 189, 3 May 22 07:02 /dev/bus/usb/001/004

     After that, add these lines to the end of your config file for the container:

     lxc.cgroup.devices.allow = c 189:* rwm
     lxc.mount.entry = /dev/bus/usb/001/004 dev/bus/usb/001/004 none bind,optional,create=file

     This is actually untested, but this is how it should work.
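The major number for the allow rule can also be derived directly from the device node instead of reading it off the ls output. A minimal sketch, demonstrated on /dev/null since the USB path above is host-specific:

```shell
# Sketch: build the lxc.cgroup.devices.allow line from a device node.
# /dev/null (char major 1) stands in here for the host-specific USB
# path /dev/bus/usb/001/004 from the example above.
DEV=/dev/null
# stat prints the device major number in hex with %t; convert to decimal
MAJOR=$((0x$(stat -c '%t' "$DEV")))
echo "lxc.cgroup.devices.allow = c ${MAJOR}:* rwm"
```

With the real USB path substituted for DEV, the printed line is what goes into the container's config file.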
  3. Already changed the first post. I haven't been able to test everything so far because there are a lot of images... I can so far confirm that Home Assistant Core works fine and Docker works fine (with containers that don't need privileged rights). I've also made a container that uses noVNC in conjunction with TurboVNC to get a desktop environment through a browser (xrdp should also work with a few tweaks, from what I know).
  4. There you have it:

     lxc-start -F BionicTest
     Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
     [!!!!!!] Failed to mount API filesystems, freezing.
     Freezing execution.

     It's failing because systemd can't mount its cgroup hierarchy. Try to set up a Debian Bullseye container; it should work fine for now, if that is a viable option for you.

     EDIT: You actually have to kill the container, or wait until the timeout kicks in, if you click on Stop.
  5. This seems possible, but I first want to get it running stably, and I need a few responses confirming that everything works well.
  6. You should be able to do this inside the container itself if the Gateway and the Address are assigned correctly within the container. I think this should help: Click
  7. Is it needed? I really don't know... you can also add it manually...
  8. I play it from time to time. Modding is always up to the user; I don't know every game, but I try to help where I can. This is out of my scope though... sorry, maybe someone else has a clue how this works.
  9. Then I would try it this way, but I really can't help further since I don't know what the Content Manager is.
  10. I'm not too familiar with this in general because I added the Server Manager on request, I can only help with the basic functionality from the container itself. Maybe look on the Wiki on GitHub from the Server Manager: Click
  11. You are talking about the variable Install Assetto-Server-Manager or am I wrong? If yes, simply set it to "true" and you should be able to connect with: http://[IPOFYOURSERVER]:8771
  12. Which settings did you change and which settings file(s) did you change? Did you read the description where the files are located?
  13. Install an SSH server in Debian-based containers:

      Method 1 (recommended):
      1. Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name).
      2. I would first recommend setting a password for the user root: enter "passwd" and type your preferred root password twice (nothing is displayed while typing).
      3. Create a user with the command "useradd -m debian -s /bin/bash" (in this case the newly created username is "debian").
      4. Set a password for the user "debian" with the command "passwd debian" (replace "debian" with your preferred username); type the password twice, as above for the root user.
      5. Install the OpenSSH server with "apt-get -y install openssh-server".
      6. After it has installed successfully, you can close the terminal window from the LXC container and connect to the container via SSH with PuTTY or your preferred SSH client, using the IP from your container, the username "debian", and the password set for that user (in this example we connect from a Linux shell with the command "ssh [email protected]"; you can see the IP address in the LXC tab in Unraid).

      You are now connected through SSH to your LXC container as the user "debian".

      Method 2 (not recommended - root connection):
      1. Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name).
      2. I would first recommend setting a password for the user root: enter "passwd" and type your preferred root password twice (nothing is displayed while typing).
      3. Install the OpenSSH server with "apt-get -y install openssh-server".
      4. Now issue the command: sed -i "/#PermitRootLogin prohibit-password/c\PermitRootLogin yes" /etc/ssh/sshd_config (this basically changes your SSH configuration file so that you can log in with the root account through SSH).
      5. Restart the sshd service with the command "systemctl restart sshd" to apply the new settings.
      6. After that, you can close the terminal window from the LXC container and connect to the container via SSH with PuTTY or your preferred SSH client, using the IP from your container, the username "root", and the password set for the "root" user (in this example we connect from a Linux shell with the command "ssh [email protected]"; you can see the IP address in the LXC tab in Unraid).

      You are now connected through SSH to your LXC container as the user "root".
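The sed one-liner from Method 2 can be sanity-checked against a throwaway copy of the config before touching a live container. A small sketch (the sample file here is made up for the demonstration; on a real system the target is /etc/ssh/sshd_config, followed by "systemctl restart sshd"):

```shell
# Create a sample file containing the stock Debian default line
printf '#PermitRootLogin prohibit-password\n' > sshd_config.sample
# Same sed expression as in Method 2, pointed at the sample file:
# "c\" replaces the whole matching line with "PermitRootLogin yes"
sed -i '/#PermitRootLogin prohibit-password/c\PermitRootLogin yes' sshd_config.sample
grep '^PermitRootLogin' sshd_config.sample
# prints: PermitRootLogin yes
```

The "c\" (change) command swaps out the entire matched line, so it works whether or not the default line carries trailing comments.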
  14. Did you also add -exec server.cfg to the GAME_PARAMS in the container template? If not, you should also be able to add a password by appending this to the GAME_PARAMS: +sv_password yourpassword
  15. Hmmmm, this is really strange, but I think a iPad from 2018 should be able to handle the format... This is really the last thing that I would do but maybe try to factory reset the iPad and see if this helps.
  16. I can only think of a port-forwarding issue. Have you tried yet to connect from outside your network to the server via direct connection, and does the server also fail to show up in the list when you try it from outside? This could also be a hairpin NAT issue.
  17. Try to add this to the Startup Parameters: +set party_enable "0" ...but I'm not too sure about that, since I'm not that familiar with the game; you can try different config files too. See also here: Click
  18. Please post your Diagnostics with the plugin installed.
  19. Do you maybe have another computer where you could put the card in, install the drivers, and put a 3D load on it? That would also be a good thing to do first.
  20. LXC (Unraid 6.10.0+)

      LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low-level, very flexible, and covers just about every containment feature supported by the upstream kernel. This plugin doesn't include the LXD-provided CLI tool lxc!

      This basically allows you to run an isolated system with shared resources at CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly. Please keep in mind that you have to set up everything manually after deploying the container, e.g. SSH access or a dedicated user account other than root.

      ATTENTION: This plugin is currently in development and features will be added over time.

      cgroup v2 (ONLY NECESSARY if you are below Unraid version 6.12.0):
      Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you enable cgroup v2. To enable cgroup v2, append the following to your syslinux.conf and reboot afterwards: unraidcgroup2 (Unraid supports cgroup v2 since version v6.11.0-rc4.)

      Install LXC from the CA App. Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist, and it always needs to have a trailing /), and click on "Update".

      ATTENTION:
      - It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running.
      - It is also strongly recommended not to share this path over NFS or SMB, because the container won't start anymore if the permissions get messed up, and to avoid data loss in the container(s)!
      - Never run New Permissions from the Unraid Tools menu on this directory, because that will basically destroy your container(s)!

      Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at "No", because the Mover won't touch this directory if it's set to "No".

      Now you will see LXC appearing in Unraid; click on it to navigate to it. Click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address, and whether Autostart should be enabled for the container; then click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address is generated randomly each time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or the LXC service is started (this can be changed later).

      In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done", and "Done" in the previous window, you will be greeted with this screen on the LXC page; to start the container, click on "Start". If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.

      After starting the container you will see various information about the container itself (assigned CPUs, memory usage, IP address). By clicking on the container name you will get the storage location of the configuration file for this container and the config file contents itself. For further information on the configuration file see here.

      Now you can attach to the started container by clicking the terminal symbol in the top right corner of Unraid and typing lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect with the shell that you prefer. You will see that the terminal prompt changes its hostname to the container's name; this means you are now successfully attached to the container's shell and the container is ready to use.

      I recommend always updating the packages first; for Debian-based containers run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing more than the basic tools are installed, so you have to install nano, vi, openssh-server, etc. yourself. To install the SSH server (for Debian-based containers), see the second post.
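For the cgroup v2 step mentioned in the guide above, the unraidcgroup2 token goes on the append line of the boot entry in /boot/syslinux/syslinux.cfg. A sketch of what the edited entry roughly looks like (your existing append line may carry additional options; keep those intact):

```
label Unraid OS
  menu default
  kernel /bzimage
  append unraidcgroup2 initrd=/bzroot
```

After saving the file, reboot so the kernel picks up the new parameter.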
  21. From what I see in your Diagnostics, the driver installs fine and is also running fine. Can you point me to the people or the threads where they have issues? If this is an issue specific to the containers, I would recommend posting in the appropriate support thread for the containers themselves. Was it running before? Everything is working on my machines, and the Nvidia Driver support thread has no posts about such issues (please always post in the appropriate support thread). It would be better to post the log from your T-Rex miner container rather than the messages from the syslog about the network connections, but I would recommend posting it in the T-Rex miner container's support thread.
  22. So is this issue resolved now? Can you try to open it in Safari and see if it is working? I think this is more dependent on the client than on the server side...
  23. This is really strange, since it probes QuickSync and VAAPI and reports success, but then the transcode with VAAPI fails and it ultimately falls back to software encoding. @always67 what have you set in your transcoding settings in Emby?