Skylinar

Posts posted by Skylinar

  1. Confirming that the hints from @Iker & @arifer work like a charm. 

     

    Made minor adjustments to swag homeassistant.subdomain.conf:

     

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name homeassistant.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app YOUR_HOMEASSISTANT_IP;
            set $upstream_port 8123;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
    
        }
        
        location ~ ^/(api|local|media)/ {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app YOUR_HOMEASSISTANT_IP;
            set $upstream_port 8123;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

     

    In addition to that (as mentioned), I added these lines to configuration.yaml, along with Home Assistant's ban functionality for too many failed login attempts:

     

    http:
      use_x_forwarded_for: true
      trusted_proxies:
        - 127.0.0.1
        - 192.168.0.2 # IP address of your unRAID box
      ip_ban_enabled: true
      login_attempts_threshold: 5

     

    I am running Home Assistant on a separate machine from the SWAG Docker container, and it works without any problems.

     

    Thanks!
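For anyone checking a similar setup, a quick smoke test from another machine can confirm that both location blocks respond through SWAG. This is a sketch; the hostname below is a placeholder for your own domain:

```shell
# Placeholder domain; substitute your own SWAG-served hostname.
HOST=homeassistant.example.com

# The frontend location ("/") should answer through SWAG:
curl -sk -o /dev/null -w '%{http_code}\n' "https://$HOST/"

# The second location block covers /api/, /local/ and /media/;
# a plain GET to /api/ should reach Home Assistant rather than SWAG:
curl -sk -o /dev/null -w '%{http_code}\n' "https://$HOST/api/"
```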

  2. On 7/25/2021 at 7:28 PM, ich777 said:

    From in game you should be able to issue commands to the server if you are registered as admin.

     

    Is this something that is really needed?

    I think many people install Oxidemod to RUST from what I know and if that's installed you have other ways to connect to the console if I'm not mistaken.

     

    I checked Oxidemod and you are right, it's not 100% needed if you want your server listed in the modded section. But if you want to administrate an "unmodded" server and be listed in the official tab, this would be a use case, I think. Would it be much effort?

  3. 1 hour ago, ich777 said:

    Sorry for the inconvenience, actually I had an error in my start script that was left over from the switch from x11-vnc to TurboVNC.

     

    Please force an update of the container on your Docker page and try it again.

    Tested it now and it is working. :)

    Confirmed working now, thank you very much!

  4. I have also had the problem a few times that the Docker service did not stop because a container would not terminate. I tried to kill the container manually and was never able to; docker kill containername had no success either. I then had to pull the plug on my Unraid server, which is really suboptimal. It's good that rebooting worked around the problem for you, but it's not a real solution. 

     

    Has anyone else experienced this and can either confirm that docker kill worked for them, or share the correct solution? I would like to know whether it is a problem with my machine or a general Unraid problem.
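For reference, this is what I tried, plus a host-side variant I have seen suggested elsewhere (untested by me; the container name is a placeholder):

```shell
# Container name is a placeholder.
NAME=containername

# What I tried via the daemon (sends SIGKILL to the container's init process):
docker kill "$NAME"

# Host-side variant: ask the daemon for the container's init PID and
# signal it directly, bypassing docker kill (assumes inspect still responds).
PID=$(docker inspect --format '{{.State.Pid}}' "$NAME")
kill -9 "$PID"
```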

     

  5. @vatsalya 

    Since you are using AMD, this method may not work: with the method I mentioned you change "Penryn" to "host", which passes through the host CPU, and in your case that is AMD, not Intel. At the time I posted the solution I was running dual Xeons, so it could be that it only works with Intel host CPUs.

    As I am no longer running a macOS VM, I sadly can't help much more here.

  6. On 4/24/2021 at 8:53 PM, Skylinar said:

     

    Thanks for the insights. I'm messing around with this and a P2000 and can't even get it working. 

     

    I've added

    
    "mt" : 4

    to the config after the "worker" entry and started the t-rex Docker in PRIVILEGED mode. Is there anything else I've missed? 

     

     

    Bump, can someone assist?

  7. On 3/17/2021 at 8:56 PM, brimnac said:

    @gamer1pc: try this, it got the Memory Tweak I was having issues with to work.

     

    I modified the config.json file using Midnight Commander. It originally ended like this:

    
    	"worker" : "unraid"
    }

     

    "worker" is no longer the last config option, so add a comma to the end and the new switches after.

    
        "worker" : "unraid",
        "mt" : 4
    }

     

    I don't know for certain, but I'd try something like this (put your worker information in the <name> variable, if applicable):

    
    	"worker" : "<name>",
    	"c" : "LTC",
    	"mc" : "FIRO"
    }

     

    I'm getting some rejected shares, so I'm going to figure that out next. --mt works for me now, though, per my LOG file:

    
    WARN: GPU #0(000100): Zotac GeForce GTX 1080, intensity set to 22, mtweak 3

     

    EDIT: I did need to set it as PRIVILEGED in the docker setup page when using the "mt" flag. I was getting errors that it wasn't in Admin mode; that solved the issue.

     

    Thanks for the insights. I'm messing around with this and a P2000 and can't even get it working. 

     

    I've added

    "mt" : 4

    to the config after the "worker" entry and started the t-rex Docker in PRIVILEGED mode. Is there anything else I've missed? 

     

    From the log:

    20210424 18:42:29 WARN: GPU #0(000c00): NVIDIA Quadro P2000, intensity 22

     

  8. On 4/11/2021 at 4:04 PM, Squid said:

     

    Thanks, but I can't confirm that. I've got a smart plug on the power socket and see an idle consumption for my Unraid system (incl. router and some other small stuff) of about 204 watts; when the Windows 10 gaming VM runs (idle) it's about 230 watts, even with Windows set to power saving.

  9. How do you deal with a GPU that is used for a gaming VM? Since GPU is bind via VFIO, I don't see it with the nvidia-smi command and can't see what power state it is in when the VM is turned off. I have a relatively high power consumption and would like to save power in all places if possible. Are there any tips?