

Posts posted by ezhik

  1. 5 hours ago, testdasi said:

    How? Basically just "plug-and-play" albeit wirelessly.

     

    RDP is essentially just glorified live screen-capture software. The acceleration happens between the game and the GPU. The GPU creates an output display. That display normally goes out to the monitor. RDP captures that display and sends it to you over the network (which explains the latency).

     

    Different streaming solutions differ in where and how they capture the output from the GPU.

     

    You can even try it for yourself (there are several free solutions out there, even open-source I think) with your current workstation. No need to commit to Unraid before you have a proof of concept.

     

    I am aware of how RDP works. Unless something has changed over the years, RDP and 3D acceleration did not go well together. If what you are saying is true, that means you can even do light gaming over RDP. That's impressive.
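    To make the capture-and-stream idea above concrete, here is a minimal sketch in Python of the loop testdasi describes: grab whatever the GPU rendered to the display and ship it over the network. It assumes the third-party mss package for screen grabs; the client address, frame format, and 30 fps pacing are made up for illustration, and real remote-desktop software adds encoding, compression, and input handling on top of this.

    --
# Minimal sketch of "capture the GPU's output and ship it over the network".
# The host/port and raw-RGB frame format are hypothetical examples.
import socket
import struct
import time

import mss  # third-party package: pip install mss

HOST, PORT = "192.168.1.50", 9999  # hypothetical client address


def stream_display() -> None:
    with mss.mss() as grabber, socket.create_connection((HOST, PORT)) as conn:
        monitor = grabber.monitors[1]  # primary display, i.e. the GPU's output
        while True:
            frame = grabber.grab(monitor)      # capture what the GPU rendered
            payload = frame.rgb                # raw RGB bytes for this frame
            header = struct.pack("!III", frame.width, frame.height, len(payload))
            conn.sendall(header + payload)     # the network hop is the added latency
            time.sleep(1 / 30)                 # crude 30 fps pacing


if __name__ == "__main__":
    stream_display()
    --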

  2. On 6/25/2019 at 8:19 AM, testdasi said:

    Could this game be played over thin clients?

    Yes. Network latency is a problem, big or small depending on which software you use to stream. I can play remotely from a Surface 3 to my main VM via RDP, which is about as janky as it gets, but it is still acceptable (albeit my game is hardly Call of Duty).

     

    In other words:

    Is it capable of provisioning VMs with GPUs?

    Do I have to be physically attached to the host in order to use the GPU?

    Can I have hardware accelerated graphics over Ethernet?

    Yes with a "but". Your hardware (CPU, motherboard, GPU) has to be happy with PCIe passthrough. Some pitfalls are for example, rubbish IOMMU group, reset bug (GPU stops working after a VM restart), bad BIOS (recent Ryzen update), CPU not supporting etc. You might want to search on the forum to see who has a similar hardware to what you are thinking of and ask the members directly for experience.

     

    Use scenario:

    My wife is an architect and she works with 3D software.

    At this moment she is looking for a new workstation.

    I am searching for a solution that lets her work remotely against the [powerful] workstation at home, whether from home or from client sites.

    I don't see any problem (except for the pitfalls mentioned above). I run GPU-accelerated video editing over RDP (again, from a Surface 3 to my main VM), so it's not just gaming that works.

    How are you using GPU 3D Acceleration over RDP?
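    A quick way to check the "rubbish IOMMU group" pitfall testdasi mentions above is to dump every IOMMU group and the PCI devices inside it; a GPU sharing a group with unrelated devices is a poor passthrough candidate. A minimal sketch, assuming a Linux host with the standard sysfs layout (run it on the Unraid box itself):

    --
# Dump each IOMMU group and the PCI devices it contains, using only the
# standard sysfs layout on a Linux host.
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")


def print_iommu_groups() -> None:
    if not IOMMU_ROOT.exists():
        print("No IOMMU groups found -- is VT-d/AMD-Vi enabled in the BIOS?")
        return
    for group in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"IOMMU group {group.name}: {', '.join(devices)}")


if __name__ == "__main__":
    print_iommu_groups()
    --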

  3. Presently, if disks are running hot (45C+), the array health check reports 'FAIL'. This is technically not correct:

    1) Parity is valid

    2) No disks have reported failures

    3) Disks are operating properly

     

    Proposals:

    1) Create an option to report the array as 'WARN' if disks are running hot.

    2) Ignore high disk temps, validate only parity and cache pool status, and report accordingly, which in this case should be 'PASS'.

     

    NOTE:

    - The same goes for the array status report (health check) while a parity check is running: it shouldn't be FAIL, it should be PASS.

    - If the array is rebuilding, it should be WARN, since the correction process has begun and is in progress.

    - If no rebuild is in place, no parity check is running, and there is a failed disk in either the array or a cache pool: FAIL. That's a legitimate failure that should trigger a failure notification (a sketch of this logic follows below).

     

    Cheers.
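    For clarity, here is a minimal sketch of the reporting logic proposed above (proposal 1 plus the NOTE rules): a rebuild downgrades the report to WARN, a failed disk with no rebuild or parity check running is a FAIL, a routine parity check stays PASS, and hot disks on an otherwise healthy array are only a WARN. The 45C threshold and the field names are illustrative; this is not Unraid's actual code.

    --
# Illustrative sketch of the proposed health-check statuses, not Unraid internals.
from dataclasses import dataclass
from typing import List

HOT_THRESHOLD_C = 45


@dataclass
class Disk:
    name: str
    temp_c: int
    failed: bool = False


def array_health(disks: List[Disk], parity_valid: bool,
                 parity_check_running: bool, rebuilding: bool) -> str:
    if rebuilding:
        return "WARN"   # correction has begun and is in progress
    if any(d.failed for d in disks) and not parity_check_running:
        return "FAIL"   # legitimate failure, should trigger a notification
    if parity_check_running:
        return "PASS"   # routine parity check is not a failure
    if not parity_valid:
        return "FAIL"
    if any(d.temp_c >= HOT_THRESHOLD_C for d in disks):
        return "WARN"   # running hot, but parity is valid and disks work
    return "PASS"


if __name__ == "__main__":
    disks = [Disk("disk1", 47), Disk("disk2", 38)]
    print(array_health(disks, parity_valid=True,
                       parity_check_running=False, rebuilding=False))  # -> WARN
    --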

     

  4. On 6/7/2019 at 7:01 PM, JasonJoel said:

    I like Sophos... When big new releases come out I still tinker with it in my lab.

     

    I used to use it but had to move away from it when it had a 50 device limit for home use (I have >90 devices on my network, most of which need to connect outbound at some point)...

    You can segregate those and put them behind another NAT ;)

  5. 3 hours ago, binhex said:

    The current way people are doing something similar is to use a cache drive (SSD) and an unassigned disk (SSD), then keep downloads on one and metadata and the docker loopback on the other; best solution at present.

     

    Nice, noted. This doesn't provide you with redundancy, but you can compensate with daily backups through other plugins.
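    As an illustration of compensating with daily backups, here is a minimal sketch that copies the unassigned SSD's contents onto the parity-protected array with rsync on a schedule (cron or a user script). Both paths are hypothetical examples, not standard Unraid locations.

    --
# Nightly snapshot copy from the unassigned SSD to the array via rsync.
# Both paths are hypothetical examples.
import subprocess
from datetime import date

SOURCE = "/mnt/disks/unassigned_ssd/appdata/"                       # hypothetical
DESTINATION = f"/mnt/user/backups/appdata-{date.today():%Y%m%d}/"   # on the array


def nightly_backup() -> None:
    # -a preserves permissions and timestamps; the dated destination keeps
    # one snapshot per day.
    subprocess.run(["rsync", "-a", SOURCE, DESTINATION], check=True)


if __name__ == "__main__":
    nightly_backup()
    --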

  6. 25 minutes ago, CHBMB said:

    This is covered in the first two posts

    A gentle RTFM. I see...

     

    The answer was:

    --
    6.  We will produce one Nvidia build per Unraid release, we will not be updating the drivers multiple times for each Unraid version, unless there is a critical bug that demands this.  So please don't ask.

    --

  7. 12 hours ago, NAS said:

    I think I can actually replicate this now.

     

    If I mount a USB drive and copy files continuously to my SSD cache drive, which is also the location of my docker loopback image, then after a few minutes docker stops responding, which obviously ruins the web GUI as well.

     

    I routinely copied files in this way in all previous versions, the cache drive seems fine (it's pretty new), and there are no errors in any log that I can see.

     

    The SSD is attached to a motherboard SATA port directly.

     

    I am pretty sure it is IO wait, as load skyrockets. Will wait and see if it's an "only me" problem.

     

    Would be nice to have 'cache groups' for different purposes.

     

    For example:

    "cache group":storage -> used strictly for storage

    "cache group": docker -> used strictly for docker images

    "cache group": VMs -> used strictly for VMs

     

    That way they won't have to fight for Disk I/O.

     

    One can only dream...
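    To illustrate the "cache groups" idea, here is a sketch of how such a mapping might look: each group owns its own device(s) and serves one purpose, so bulk copies to the storage group can't starve the docker loopback or the VM vdisks of disk I/O. The device names and the lookup helper are purely hypothetical; this is a thought experiment, not an existing Unraid feature.

    --
# Hypothetical "cache group" mapping: one pool of devices per purpose.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CacheGroup:
    purpose: str
    devices: List[str]


CACHE_GROUPS: Dict[str, CacheGroup] = {
    "storage": CacheGroup(purpose="user shares / downloads", devices=["/dev/sdb"]),
    "docker":  CacheGroup(purpose="docker loopback image",   devices=["/dev/sdc"]),
    "vms":     CacheGroup(purpose="VM vdisks",               devices=["/dev/sdd"]),
}


def group_for(share: str, assignments: Dict[str, str]) -> CacheGroup:
    """Resolve which cache group a share writes to (hypothetical mapping)."""
    return CACHE_GROUPS[assignments[share]]


if __name__ == "__main__":
    assignments = {"downloads": "storage", "appdata": "docker", "domains": "vms"}
    print(group_for("downloads", assignments))
    --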

  8. I can attest that copy/pasting code blocks from the forum has caused major headaches. The one mentioned above for the Nvidia plugin is a perfect example. It took me at least 40 minutes to figure out why the Nvidia card wasn't getting passed through. I had to copy the command line generated when the docker container gets created/updated, paste it into a text file, save it, and open it with vi/vim and Notepad++ with "show all characters" enabled to see where the issue was. It's rough to troubleshoot...
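    The breakage usually comes down to an invisible or non-ASCII character (a non-breaking space, smart quote, or zero-width space) surviving the paste. Here is a small sketch that flags any such character in a pasted command, as an alternative to hunting for it with Notepad++'s "show all characters"; the sample docker run line is a made-up example with a sneaky non-breaking space.

    --
# Flag invisible or non-ASCII characters hiding in a pasted command line.
import unicodedata


def flag_suspicious_chars(command: str) -> None:
    for index, char in enumerate(command):
        if ord(char) > 126 or (ord(char) < 32 and char not in "\n\t"):
            name = unicodedata.name(char, f"U+{ord(char):04X}")
            print(f"position {index}: {name!r} -- likely paste damage")


if __name__ == "__main__":
    pasted = "docker run -d --runtime=nvidia\u00a0--name plex ..."  # made-up example
    flag_suspicious_chars(pasted)
    --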

  9. 50 minutes ago, CHBMB said:

    OK, announcement.

     

    Any stupid posts asking why this isn't released, if they can build it themselves, etc etc prepare to hear my wrath.

     

    We're not complete noobs at this; @bass_rock and I wrote this, and when it's broken we'll do our best to fix it. Any amount of asking is not going to speed it up.

     

    If anyone thinks they can do better, then by all means write your own version, but as far as I can remember nobody else did, which is why we did it.

     

    We're working on it. 

     

    ETA: I DON'T KNOW

     

    When that changes I'll update the thread.

     

    My working theory for why this isn't working is that there's been a major GCC version upgrade between v8.3 and v9.1, so I'm trying to downgrade GCC. That's difficult, as I can't find a Slackware package for it, so I'm building it from source and making some Slackware packages that I can keep as static sources, which is not as easy as I hoped.

     

    Thank you @CHBMB, your work is greatly appreciated! We will patiently await the updates. 
