allanp81

Posts posted by allanp81

  1. In case anyone ever asks, I got this working quite simply by doing the following:

     

    #!/bin/bash

    # Path to the switch file inside the nextcloud data directory
    f=/path/to/file/on/nextcloud/instance

    inotifywait -m -e modify "$f" --format "%e" | while read -r event; do
        if [ "$event" == "MODIFY" ]; then
            contents=$(cat "$f")
            if [ "$contents" == "on" ]; then
                echo "Starting VM"
                virsh start "VM NAME"
            elif [ "$contents" == "off" ]; then
                echo "Stopping VM"
                virsh shutdown "VM NAME"
            fi
        fi
    done

     

    So basically: create a text file on your nextcloud instance, then find the path to it via the terminal. If you then edit the file via the nextcloud web interface or mobile app and set the contents to "on" or "off", it will power the VM on or off accordingly.
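    The on/off dispatch inside the watcher can be pulled out into a small function for testing on any machine; this is just a sketch, with the `virsh` calls stubbed out by `echo` and "VM NAME" as a placeholder for the real domain name:

```shell
#!/bin/bash
# Sketch: the watcher's on/off dispatch as a standalone function.
# The virsh calls are replaced by echo so this runs without libvirt;
# "VM NAME" is a placeholder.
vm_dispatch() {
    case "$1" in
        on)  echo "virsh start \"VM NAME\"" ;;
        off) echo "virsh shutdown \"VM NAME\"" ;;
        *)   echo "ignored: $1" ;;
    esac
}
```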

  2. I only need my VM to be accessible two days a week, so is there a simple way I could trigger a power-on of the VM?

     

    I have nextcloud, so I thought: is there a way, using a script, to watch for the presence or modification of a file and use that as the trigger?
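    The file-watch idea can also be sketched without inotify, as a simple polling check; the helper below just reports the file's contents when they differ from the last seen value (it's meant to be called from a loop with a `sleep` between passes, and the path is whatever you pick):

```shell
#!/bin/bash
# Sketch of a polling trigger: print the file's contents only when they
# have changed since the previous check. Call repeatedly from a loop,
# e.g. with "sleep 30" between passes.
poll_once() {
    local f=$1 last=$2 cur
    cur=$(cat "$f" 2>/dev/null)
    if [ "$cur" != "$last" ]; then
        echo "$cur"
    fi
}
```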

  3. So does this container support Nvidia transcoding? I found some info quite a few pages back, but I can't get it to work. I have installed the Nvidia drivers etc. and have tried adding NVIDIA_DEVICE_CAPABILITIES and NVIDIA_VISIBLE_DEVICES as container variables, but nothing happens.

     

    *Edit: never mind, I got it working by adding "--runtime=nvidia" to the extra parameters.

  4. I have an rsync job that backs up my main server to a backup server. This used to work absolutely fine, but recently it's started making the 2nd server reboot: it seems to max out one of the CPU cores and makes it start overheating.

     

    [attached screenshot: per-core CPU usage]

     

    You can see how it oddly spreads the load across the cores. It appears that it's sshd that causes the large amount of CPU usage.

     

    Sometimes the transfer rate drops from about 110MB/sec to much, much lower; the CPU usage then drops right down and the temps are fine, but obviously the backup then takes a crazy amount of time. I have the turbo write plugin enabled and set to automatic, but it doesn't seem to make much difference.

     

    Anyone got any ideas as to why this is happening? The 2nd server is a Core i5, but it's literally only used as a copy of my main server.
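    As an aside (a suggestion, not something from the thread): since the CPU load comes from sshd's encryption, one standard knob is rsync's `--bwlimit` option, which caps throughput and so indirectly bounds the encryption load. A minimal sketch that only assembles the command, with placeholder host and paths:

```shell
#!/bin/bash
# Sketch: build an rsync invocation with a throughput cap. --bwlimit is a
# standard rsync option; host and paths are placeholders. The function
# prints the command rather than running it.
build_backup_cmd() {
    local limit=$1 src=$2 dest=$3
    echo "rsync -a --bwlimit=${limit} ${src} ${dest}"
}
```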

  5. I've lost the disk that was saving all of my recordings, and now I can't start the docker, as I presume it's expecting to see folders in the new location that no longer exist. What's the best way to fix this? Should I take a copy of the appdata config, do a full reset on the docker, and then copy the config files back?
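    The copy-reset-restore plan from the post can be sketched as a pair of helpers (directory names are placeholders, and the actual container reset happens in the UI between the two calls):

```shell
#!/bin/bash
# Sketch of the backup/restore halves of the plan: copy the appdata
# directory aside before resetting the container, then copy it back
# afterwards. Paths are placeholders.
backup_appdata() {
    cp -a "$1" "$1.bak"
}
restore_appdata() {
    rm -rf "$1"
    cp -a "$1.bak" "$1"
}
```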

  6. Hi all, my board used to have no issues passing through PCI-E NICs etc. to VMs, but for some reason it's recently stopped working and I don't know what's going wrong. I've tried all sorts (pci-stub, vfio bind, etc.) and have toggled the ACS override on and off, but I'm not having much luck. My hardware configuration hasn't changed for years, apart from adding an NVMe drive maybe a year or two ago. Asrock say installing a drive there disables PCIE-5.

     

    When I go to Tools > System Devices it's now saying "No IOMMU Groups Available" rather than listing all of the IOMMU groups like it used to.

    box-diagnostics-20210601-1340.zip
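    For reference, the groups that page lists come from sysfs under `/sys/kernel/iommu_groups`; here is a sketch that enumerates them (the base directory is a parameter so it can also be pointed at a test tree, and it prints nothing when no groups exist, matching the symptom above):

```shell
#!/bin/bash
# Sketch: list IOMMU groups the way the kernel exposes them in sysfs.
# Each group directory contains a "devices" subdirectory of PCI IDs.
# Prints nothing when the kernel exposes no groups.
list_iommu_groups() {
    local base=${1:-/sys/kernel/iommu_groups} g
    for g in "$base"/*; do
        [ -d "$g/devices" ] || continue
        echo "group ${g##*/}: $(ls "$g/devices")"
    done
}
```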