shrmn

Members
  • Posts: 6

Posts posted by shrmn

  1. On 1/28/2021 at 8:52 PM, ich777 said:

    A workaround for this would be to do it on your own for now until a fix is released. Since Unraid is based on Slackware, this is pretty straightforward...

     

    Open up a Unraid terminal and enter the following:

     

    
    cd /tmp
    wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/patches/packages/sudo-1.9.5p2-x86_64-1_slack14.2.txz
    installpkg sudo-1.9.5p2-x86_64-1_slack14.2.txz
    rm -rf /tmp/sudo-1.9.5p2-x86_64-1_slack14.2.txz

     

    You can also append this to your 'go' file to install it on every reboot.

     

    I know this is only a temporary solution but it's a solution that works.

    After that you can issue 'sudo -V' in the terminal and you will see that you now have sudo 1.9.5p2 installed.

     

    (Btw the package is from the official Slackware repo)
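
    If you do want to automate it via the go file, the addition could look something like this (just a sketch; /boot/config/go is the standard Unraid boot script, and you may need to bump the version in the URL if Slackware publishes a newer patch):

    # Appended to /boot/config/go: reinstall the patched sudo on every boot
    cd /tmp
    wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/patches/packages/sudo-1.9.5p2-x86_64-1_slack14.2.txz
    installpkg sudo-1.9.5p2-x86_64-1_slack14.2.txz
    rm -rf /tmp/sudo-1.9.5p2-x86_64-1_slack14.2.txz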

     

     

     

    EDIT: Wrote a quick plugin if this is what you are after; it does basically the same thing and you don't have to edit anything (works only on Unraid versions 6.8.2 to 6.9.0-rc2):

    
    https://raw.githubusercontent.com/ich777/unraid-sudo-patch/master/CVE-2021-3156.plg

     

     

    Got an error running the plugin:

     

    -----------------Downloading sudo 1.9.5p2, please wait...!---------------------
    -----------This could take some time, please don't close this window!----------
    
    -----ERROR - ERROR - ERROR - ERROR - ERROR - ERROR - ERROR - ERROR - ERROR------
    ------------------------Can't download sudo 1.9.5p2-----------------------------
    plugin: run failed: /bin/bash retval: 1

     

  2. Had a local business that needed to expose their CRM server to the public net today, and the owner did not want to open any ports. Cloudflare's Argo Tunnel came to mind.

     

    They had an existing Unraid server handling file shares and backups, so I started looking at ways to leverage this (actually underutilised) server. Thought I'd share the steps I took to get the tunnel working here.

     

    The steps below assume some understanding of and experience with reverse proxy setups and User Scripts.

     

    The setup consists of two broad steps:

    A. Install any reverse proxy as a Docker container (I used Nginx Proxy Manager) and take note of the exposed port/IP.

    • In this example, I will be setting up only the HTTP proxy on port 1880.
    • This reverse proxy is the entry point of the tunnel. Configure it to connect to whichever other services you have (a rough docker run sketch for the proxy follows below).
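
    In case you prefer to spin up the proxy manually rather than from Community Applications, a rough docker run equivalent could look like this (a sketch only; the jc21/nginx-proxy-manager image, the appdata paths and the 1880:80 / 1881:81 mappings are assumptions chosen to match the port used in this example, not necessarily what I ran):

      # Nginx Proxy Manager: 80 = HTTP proxy, 81 = admin UI inside the container
      docker run -d --name=npm \
        -p 1880:80 \
        -p 1881:81 \
        -v /mnt/user/appdata/npm/data:/data \
        -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
        jc21/nginx-proxy-manager:latest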

     

    B. Install cloudflared and run it on startup

     

    1. ssh into your server and download the cloudflared binary
      wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz
    2. Extract the tgz
      tar -xvzf cloudflared-stable-linux-amd64.tgz
    3. Log in to Cloudflare (this will produce a URL; open it in your browser)
      ./cloudflared tunnel login
    4. Once authenticated, verify that the tunnel works (change your.hostname.com to your hostname)
      ./cloudflared tunnel --hostname your.hostname.com --url http://localhost:1880

      Then visit your.hostname.com; you should see a Cloudflare welcome page. If DNS hasn't propagated yet, try setting your DNS resolver to 1.1.1.1.

    5. Save your configuration as a YAML-formatted file at ~/.cloudflared/config.yml; the contents should look like this:

      hostname: your.hostname.com
      url: http://localhost:1880
      

       

    6. Copy the contents of ~/.cloudflared into /etc/cloudflared

      mkdir -p /etc/cloudflared
      cp ~/.cloudflared/config.yml /etc/cloudflared/
      cp ~/.cloudflared/cert.pem /etc/cloudflared/
    7. Install the User Scripts plugin if you haven't already, and create a new script. I named mine cloudflared
    8. Remove the default description file and copy the contents of the script below:
      #!/bin/bash
      #description=Launches cloudflared with config and cert loaded in /etc/cloudflared
      #backgroundOnly=true
      #arrayStarted=true
      
      # Above lines set the script info read: https://forums.unraid.net/topic/48286-plugin-ca-user-scripts/page/7/?tab=comments#comment-512697
      
      # Set the path to the cloudflared config and cert
      configpath=/etc/cloudflared
      
      echo "Starting Cloudflared Binary with config and cert in $configpath"
      
      /root/cloudflared --config $configpath/config.yml --origincert $configpath/cert.pem
      
      echo "Exiting Cloudflared Binary"
      
      exit
    9. Refresh the User Scripts page and set the script to run on startup of array
      (screenshot: the script's schedule set to run at array startup)
    10. View the logs to ensure that your routes are secured and established. You should see something like this:
      Starting Cloudflared Binary with config and cert in /etc/cloudflared
      time="2019-07-24T01:36:27+08:00" level=info msg="Version 2019.7.0"
      time="2019-07-24T01:36:27+08:00" level=info msg="GOOS: linux, GOVersion: go1.11.5, GoArch: amd64"
      time="2019-07-24T01:36:27+08:00" level=info msg=Flags config=/etc/cloudflared/config.yml hostname=your.hostname.com logfile=/var/log/cloudflared.log origincert=/etc/cloudflared/cert.pem proxy-dns-upstream="https://1.1.1.1/dns-query, https://1.0.0.1/dns-query" url="http://localhost:1880"
      time="2019-07-24T01:36:27+08:00" level=info msg="Starting metrics server" addr="127.0.0.1:38457"
      time="2019-07-24T01:36:27+08:00" level=info msg="Autoupdate frequency is set to 24h0m0s"
      time="2019-07-24T01:36:27+08:00" level=info msg="Proxying tunnel requests to http://localhost:1880"
      time="2019-07-24T01:36:30+08:00" level=info msg="Connected to HKG"
      time="2019-07-24T01:36:30+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
      time="2019-07-24T01:36:30+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
      time="2019-07-24T01:36:32+08:00" level=info msg="Connected to SIN"
      time="2019-07-24T01:36:32+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
      time="2019-07-24T01:36:32+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
      time="2019-07-24T01:36:33+08:00" level=info msg="Connected to HKG"
      time="2019-07-24T01:36:33+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
      time="2019-07-24T01:36:33+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
      time="2019-07-24T01:36:34+08:00" level=info msg="Connected to SIN"
      time="2019-07-24T01:36:34+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
      time="2019-07-24T01:36:34+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"

       

    11. Voila!
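
    One note on the log output in step 10: the logfile=/var/log/cloudflared.log entry in the flags line is an optional setting. If you want cloudflared to keep its own log file, it can (as far as I can tell) be added to the same config.yml from step 5, e.g.:

      hostname: your.hostname.com
      url: http://localhost:1880
      logfile: /var/log/cloudflared.log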

     

  3. Not sure if you've gotten to it yet, but I got my GoodSync Connect docker working fine using bridged networking, exposing port 33333:

     

    <?xml version="1.0"?>
    <Container version="2">
      <Name>gsdock</Name>
      <Repository>grin/gsdock</Repository>
      <Registry>https://hub.docker.com/r/grin/gsdock/</Registry>
      <Network>bridge</Network>
      <Privileged>false</Privileged>
      <Support/>
      <Overview/>
      <Category>Backup:</Category>
      <WebUI/>
      <TemplateURL/>
      <Icon>https://lh5.ggpht.com/wKQx6--IZ50yitxPX24gbsO2rrehdaGNw9J4rHceHlwNPFrNY7CfCO3UDQub7GrsQr4=w300</Icon>
      <ExtraParams/>
      <DateInstalled>1510755367</DateInstalled>
      <Description/>
      <Networking>
        <Mode>bridge</Mode>
        <Publish>
          <Port>
            <HostPort>33333</HostPort>
            <ContainerPort>33333</ContainerPort>
            <Protocol>tcp</Protocol>
          </Port>
          <Port>
            <HostPort>33338</HostPort>
            <ContainerPort>33338</ContainerPort>
            <Protocol>udp</Protocol>
          </Port>
          <Port>
            <HostPort>33339</HostPort>
            <ContainerPort>33339</ContainerPort>
            <Protocol>udp</Protocol>
          </Port>
        </Publish>
      </Networking>
      <Data>
        <Volume>
          <HostDir></HostDir>
          <ContainerDir>/data</ContainerDir>
          <Mode>rw</Mode>
        </Volume>
      </Data>
      <Environment>
        <Variable>
          <Value></Value>
          <Name>GS_USER</Name>
          <Mode/>
        </Variable>
        <Variable>
          <Value></Value>
          <Name>GS_PWD</Name>
          <Mode/>
        </Variable>
        <Variable>
          <Value>UNRAID Tower</Value>
          <Name>GS_ID</Name>
          <Mode/>
        </Variable>
      </Environment>
      <Config Name="GSTP Port" Target="33333" Default="" Mode="tcp" Description="For Goodsync Connect protocol" Type="Port" Display="always" Required="true" Mask="false">33333</Config>
      <Config Name="GoodSync Broadcast 1" Target="33338" Default="" Mode="udp" Description="GoodSync Broadcast 1 for local net" Type="Port" Display="always" Required="false" Mask="false">33338</Config>
      <Config Name="GoodSync Broadcast 2" Target="33339" Default="" Mode="udp" Description="GoodSync Broadcast 2 for local net" Type="Port" Display="always" Required="false" Mask="false">33339</Config>
      <Config Name="GS_USER" Target="GS_USER" Default="" Mode="" Description="GoodSync Connect Username" Type="Variable" Display="always" Required="false" Mask="true"></Config>
      <Config Name="GS_PWD" Target="GS_PWD" Default="" Mode="" Description="GoodSync Connect Password" Type="Variable" Display="always" Required="false" Mask="true"></Config>
      <Config Name="GS_ID" Target="GS_ID" Default="" Mode="" Description="(optional) GoodSync Connect ID to identify this server" Type="Variable" Display="always" Required="false" Mask="false">UNRAID Tower</Config>
      <Config Name="GoodSync Connect Container Path" Target="/data" Default="" Mode="rw" Description="GoodSync Connect Container Path" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/backups</Config>
    </Container>
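
    For anyone who prefers the command line, the same settings expressed as a docker run would be roughly this (a sketch derived from the template above; fill in your own GS_USER and GS_PWD):

    docker run -d --name=gsdock \
      --net=bridge \
      -p 33333:33333/tcp \
      -p 33338:33338/udp \
      -p 33339:33339/udp \
      -e GS_USER='<your GoodSync Connect username>' \
      -e GS_PWD='<your GoodSync Connect password>' \
      -e GS_ID='UNRAID Tower' \
      -v /mnt/user/backups:/data \
      grin/gsdock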

     

  4. Update to the issues I faced above: I re-created the entire VM, this time stubbing the NVMe drive. The console flooding of "/dev/nvme0n1: No such file or directory" no longer happens.

     

    Also, this time I fixed the min and max memory settings at 16GB, and memory use is down to nominal 2GB levels at startup.
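
    (For reference, in the VM's XML view this "fixed" allocation just means the two memory elements carry the same value; a sketch with 16GB expressed in KiB, which is what the generated libvirt XML uses:)

    <memory unit='KiB'>16777216</memory>                 <!-- max memory -->
    <currentMemory unit='KiB'>16777216</currentMemory>   <!-- initial allocation, set equal for a fixed size -->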

     

    Everything works fine with a steady 100+ fps in Overwatch with the RX580, but sound suddenly drops out about 5-10 minutes into any game or video (on YouTube). The only way to fix it is to restart the VM. I've tried an external USB-based Creative Sound Blaster HD, but the same issue persists.

  5. Thanks for your very helpful videos that convinced me to give UnRAID a shot after my NAS died and took 3 WD Red drives with it.

     

    So I've just started dabbling in UnRAID, and the main thing I needed was a gaming VM running off an NVMe drive, alongside the 6-drive array. (I basically took my gaming rig and converted it for UnRAID.)

     

    I followed @billington.mark's steps and used @gridrunner's scripts to have the OVMF files copied over upon array start.

    Windows 10 (build 1709 / Fall Creators Update) installed really quickly.

     

    At this point I realised I had forgotten to mount the VirtIO drivers ISO, so I did that, and the ethernet, serial and other drivers installed successfully.

     

    I also passed through an AMD RX580, through which I had been setting all of this up. Naturally, the next thing I did was open Edge and grab the latest RX580 drivers. Installation went through smoothly, and I chose to restart the VM through the standard Start Menu. Boom. BSOD. The error said something about attempting to write to read-only memory. It then rebooted on its own and it's now stuck at the Tianocore logo, with nothing else.

     

    I turned on my monitor connected to the host machine's integrated graphics and noticed that this keeps appearing randomly:

     

    /dev/nvme0n1: No such file or directory

    This has appeared 13 times in the last hour of getting Windows running. Any idea what's causing this?

     

     

    EDIT: 

    Maybe I should add that I configured it to start with 2GB of RAM and allowed it to go up to 8GB (I have 32GB of non-ECC RAM on the host). Task Manager shows 7.3/8.0GB used right after startup. Is this normal?

     

    4 out of 8 threads (2 cores) of the host's i7-6700 are assigned to the VM.