Xaero

Members
  • Posts: 384
  • Days Won: 2

Xaero last won the day on July 19 2019



Reputation: 110

Community Answers: 1

  1. Anybody able to get the GPU to idle properly with this docker? As long as Steam is running, the lowest power state it will enter is P0 instead of dropping to P8 (2D clocks). This is a pretty substantial power draw difference when the docker isn't actually doing anything - 20W in P0 vs 7W in P8 on my system. I was running the debian steam nvidia docker before without this issue. For now I'm just closing Steam inside the docker and manually opening it again when I want to use it, but that's kind of a PITA and requires constant manual intervention.
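     For reference, a minimal way to watch which state the card settles into from the Unraid host - these are standard nvidia-smi query fields, nothing specific to this container:

        # Poll the GPU's performance state and power draw every few seconds
        # (P0 = full 3D clocks, P8 = 2D/idle clocks).
        watch -n 5 'nvidia-smi --query-gpu=pstate,power.draw --format=csv,noheader'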
  2. Would it be possible to add Core Keeper dedicated server? AppID 1963720 I've been playing with it a little and have almost got it starting. They do some unconventional stuff with their launch.sh script (launch.sh calls gnome-terminal with $PWD/_launch.sh - gnome-terminal isn't present for obvious reasons so that bombs out) I'll keep plugging at it and see if I can get it rolling on my own. Thanks for the existing dockers!
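     A rough sketch of one way around the gnome-terminal dependency - the directory name is a placeholder for wherever the AppID 1963720 server files end up, and this hasn't been verified against the existing containers:

        # launch.sh only wraps _launch.sh in a gnome-terminal window, so calling
        # the inner script directly sidesteps the missing terminal emulator.
        cd "${SERVER_DIR}"          # placeholder for the Core Keeper install directory
        chmod +x ./_launch.sh
        ./_launch.sh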
  3. Trying to move to this from the debian buster-nvidia docker and I can't seem to get the VNC port(s) to change. I'm adding: ` -e PORT_VNC="5904" -e PORT_AUDIO_STREAM="5905"` EDIT: I realized that at some point my GUI mode had started running on the nvidia card instead of my board's integrated GPU. This was undesirable and I corrected it (my board has IPMI, and the integrated graphics must be used for IPMI to be able to capture the screen). After correcting that, purging this container, and setting it back up, everything worked as expected. Not sure why it didn't work with the GUI running on the GPU, or why it also didn't work with DISPLAY=":0". No worries, as it's working as intended now.
  4. Not sure how to best approach the situation, but the JRE recently introduced a change that breaks compatibility with older Forge modpacks: sun.security.util.ManifestEntryVerifier was changed in a way that breaks startup. The container just updated and now I cannot start my modpacks, so I'll have to downgrade for now, I suppose. See here: https://github.com/McModLauncher/modlauncher/issues/91 Note that only packs on Forge versions older than the one that fixes this are affected, but it's still a problem.
  5. The stream froze because ffmpeg is getting killed as a high memory consumer when the OOM procedure runs:

        Jan 30 01:12:57 Alexandria kernel: Lidarr invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
        ...
        Jan 30 01:12:57 Alexandria kernel: Tasks state (memory values in pages):
        Jan 30 01:12:57 Alexandria kernel: [  pid  ]   uid  tgid  total_vm      rss  pgtables_bytes  swapents  oom_score_adj  name
        Jan 30 01:12:57 Alexandria kernel: [ 19532]     0  19532  1466611  1447233  11796480        0         0              ffmpeg
        ...
        Jan 30 01:12:57 Alexandria kernel: Out of memory: Killed process 19532 (ffmpeg) total-vm:5866444kB, anon-rss:5789584kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:11520kB oom_score_adj:0
        Jan 30 01:12:58 Alexandria kernel: oom_reaper: reaped process 19532 (ffmpeg), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

     That just tells us what got killed as a result of the OOM state, not what was consuming the most memory. The biggest memory consumer at the time ffmpeg was killed was this java process:

        Jan 30 01:12:57 Alexandria kernel: Tasks state (memory values in pages):
        Jan 30 01:12:57 Alexandria kernel: [  pid  ]   uid  tgid  total_vm     rss  pgtables_bytes  swapents  oom_score_adj  name
        Jan 30 01:12:57 Alexandria kernel: [ 21823]    99  21823  3000629  145993  2207744         0         0              java

     But this table only keeps track of running processes, not files stored in tmpfs, for example. From what I can see, tmpfs doesn't appear to be using that much memory.
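     For anyone who wants to check the tmpfs side themselves, this is just plain coreutils/procfs on the host - nothing Unraid-specific:

        # Per-mount usage of the RAM-backed filesystems (Unraid's / typically shows up as rootfs)
        df -h -t tmpfs -t rootfs
        # Kernel-wide view: Shmem covers tmpfs and shared-memory pages
        grep -E 'MemAvailable|^Shmem:' /proc/meminfo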
  6. How much data are you piping through Borg? I see above it's configured to use /.cache/borg as its cache directory. "/" lives in memory on Unraid, so it's going to cache everything in memory while it does its work. If you have individual files larger than your remaining memory, this is likely the cause. As for the high CPU utilization - that's more or less expected with Borg. It chunks and hashes files, uses the hashes to make sure chunks aren't duplicates of one another, and then compresses whatever is new. Hashing is computationally expensive, and so is compression, so high CPU use is expected. As for the BAR messages in the log - those aren't related. It looks like the nvidia driver is asking the kernel to reserve more memory than the bus uses - which isn't completely outside of reason, but also isn't how things *should* be written according to the Linux kernel; it's mostly an informational/debugging message.
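     A minimal sketch of pointing Borg's cache at array-backed storage instead of the in-memory root filesystem - the appdata path here is just an example, not a required location:

        # Keep Borg's chunk index / files cache off the RAM-backed rootfs.
        export BORG_CACHE_DIR=/mnt/user/appdata/borg/cache    # example path on the array
        export BORG_CONFIG_DIR=/mnt/user/appdata/borg/config  # optional, keeps config off / as well
        mkdir -p "$BORG_CACHE_DIR" "$BORG_CONFIG_DIR"
        # any borg command run afterwards in this shell will use these directories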
  7. You have a few options. You can burn the image to a physical disk and then boot it on real hardware - YMMV depending on the OS (Linux usually doesn't care) as to what does and doesn't work, hardware- or software-wise, on real bare metal. For example, Windows may deactivate itself if the physical configuration is too different from the VM it was running on. The other option is to use a hypervisor or VM solution to boot the virtual hard disk file: you can import the machine into VirtualBox or VMware, set up QEMU/KVM on your bare-metal OS and boot the image there, etc. I would recommend snapshotting before migrating in both directions (a backup before you go on the trip, and a backup before you replace the VM image afterwards). Another option is to set up Apache Guacamole or a similar remote access application and just remote into the server/VM and do your work remotely, just as if you were at home working on the VM. Of course, this is limited by your internet speed and access abroad, so it might not be applicable.
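     As a rough example of the QEMU/KVM route, something along these lines boots an existing vdisk on another Linux machine - the image path, memory, and core counts are placeholders, and the format has to match the actual vdisk (raw vs qcow2):

        # Boot an Unraid VM's disk image under plain QEMU/KVM elsewhere.
        qemu-system-x86_64 \
          -enable-kvm \
          -m 8192 -smp 4 -cpu host \
          -drive file=/path/to/vdisk1.img,format=raw,if=virtio \
          -vga virtio \
          -nic user,model=virtio-net-pci
        # A Windows guest without virtio drivers installed may need if=ide instead of if=virtio.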
  8. As the title reads, I was using BIND_MGT to keep the management interface bound only to my local subnet on ports 80/443. From there, I had a Heimdall docker answering on all remaining interfaces on 80/443. This was working prior to the 6.10 update and no longer does. Looking at the nginx initscript `/etc/rc.d/rc.nginx`, I can see that all references to the BIND_MGT variable have been removed, along with the conditional that made this logic work before. Was this an intentional change? How can I reliably set up the Unraid WebUI so it doesn't bind to all interfaces and take up 80/443? My end goal is for no docker containers to be able to access the WebUI, and for users connected via the non-LAN-access WireGuard tunnel to be unable to access the WebUI. I still want there to be a convenient landing page at the server's hostname, just no management access. I did put this in the general support section but received no response after over a month, so I'm opening this as a bug report.
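     For anyone trying to reproduce this on 6.10, a quick way to confirm what changed and what is actually bound - both are stock tools on the Unraid host:

        # Confirm the shipped initscript no longer references BIND_MGT
        grep -n 'BIND_MGT' /etc/rc.d/rc.nginx
        # See which addresses the WebUI's nginx is listening on
        ss -lntp | grep -E ':(80|443)\b'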
  9. Going to go ahead and open a bug report on this since I'm getting zero traction here.
  10. Actually, that isn't feasible for me. I don't want to be stuck with an appliance that is perpetually out of date, and I also don't really have the bandwidth in my life to constantly build my own boot images for a single script; that was part of my reason for going with Unraid - to not have to maintain a full set of config files and startup scripts every time something changes.
  11. I suppose I will just roll back indefinitely or maintain my own copy of this particular initscript.
  12. Correct. The QEMU/KVM hypervisor is just a process running under the Linux kernel, so the Linux kernel's I/O, memory, and CPU schedulers handle dishing out resources to it. Setting things like the number of cores only results in the guest OS behaving as though it has those resources available to it. When the guest OS creates a thread, the Linux kernel ultimately decides which CPU core it actually runs on. The exception is to pin the core assignments (this forces the kernel to always run those threads on the specified CPUs) and to isolate those cores from the kernel itself (this prevents the kernel from running native Linux processes on the specified cores). By combining CPU pinning and isolation you can effectively section off your CPU so that ONLY threads spawned by the guest OS run on those CPU cores.
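     A rough sketch of what that combination looks like outside the GUI, assuming a libvirt domain named "Win10" and host cores 4-7 set aside for it - both the name and the core numbers are placeholders:

        # 1) Isolate cores 4-7 from the host scheduler (kernel command line,
        #    or the CPU isolation options on Unraid's CPU Pinning settings page):
        #      isolcpus=4-7
        # 2) Pin each guest vCPU to one of the isolated host cores via libvirt:
        virsh vcpupin Win10 0 4
        virsh vcpupin Win10 1 5
        virsh vcpupin Win10 2 6
        virsh vcpupin Win10 3 7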
  13. EDIT2: Just leave the core count untouched. As long as you aren't isolating those CPU cores from Unraid, Linux should be able to use the cores even though there is a VM running on them (since the VM is idle). Changing memory should not trip the WGA activation message.
  14. So, I have a use case for the BIND_MGT option. I want the Unraid WebUI to only be available on my local LAN. Users connecting via WireGuard using the hostname of my server instead get Heimdall running in a docker. I have a second, management WireGuard tunnel that provides LAN access, so I can reach the WebUI when connected via that tunnel, or via Apache Guacamole for remote administration. I had this working fine: I edited /boot/config/ident.cfg to set BIND_MGT="yes", restarted nginx, and was able to run Heimdall with manual port assignments for the correct subnets. When connecting on my LAN I got the Unraid dashboard, and when connecting through WireGuard I got Heimdall. Everything was peachy. I have since updated to 6.10, and the Heimdall docker can no longer start because port 80 is already in use by the WebUI. I double-checked that I have everything set correctly, and I do. But looking at /etc/rc.d/rc.nginx, all references to BIND_MGT have been removed, indicating that I can no longer tell the Unraid WebUI to only listen on a specific interface. Is this an intended change? Am I going to have to start maintaining an override of this file and manually start the WebUI via script to keep my intended functionality? I don't recall seeing this in the changelogs, but I could easily have overlooked it.
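     For context, this is roughly what the working pre-6.10 setup amounted to on the host - the file and initscript paths are the ones named above, though the exact restart invocation may differ between releases:

        # /boot/config/ident.cfg - bind the WebUI to the management interface only
        BIND_MGT="yes"

        # then restart nginx so the setting takes effect
        /etc/rc.d/rc.nginx restart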
  15. Keep in mind that they may all very well pass in those first two slots. The slot in question may be where the error occurs - but that doesn't always mean the slot itself is bad. The memory controller isn't on the northbridge anymore; it's on the CPU itself. Because LGA2011 (and its child variants, both narrow and wide ILM) is such a large socket, it's really easy to end up with too little mounting pressure on the first go-around and miss just a couple of pins, or have slightly too much resistance on a couple of pins, and lose a memory channel/slot. If you find out just one slot has the problem, pop that CPU out, clean it, and put it back in. With any luck the problem will just be gone.