Posts posted by aterfax

  1.   

    8 minutes ago, Tswitcher29 said:

    Kinda rude not everyone has perfect grammar and i suck at it I'm sorry for that idk what's so hard to understand about what I said instead of making fun of my grammar how about you ask me questions to better understand what you need to help me 

    Your poor grammar makes it very difficult to read. Try using bullet points, commas and paragraphs to separate your different points.

    Post your diagnostics. Which Space Invader One video do you mean? Pastebin the Error 43 output.

    Do you mean this error 43?

    https://mathiashueber.com/fighting-error-43-nvidia-gpu-virtual-machine/

  2. Just now, 1812 said:

    We’ve had a few issues in regards to it not compiling starting 2 RC,s ago including but not limited to not having time to work on why it’s not compiling due to being away from home, and wandering into technical areas beyond the scope of our collective knowledge.  This combined with the problems the nvidia is currently having building caused the RMRR patch to fall on the back burner for the moment as they may (or may not) be similar issues and/or be a new second issue that will need to be resolved.
     

    with that said, I’m hoping to spend a little more time on this starting next week when I return from traveling.

    Personally, I have to try to integrate the nvidia patch, the RMRR patch and the docker swarm patches, so I understand where you are coming from.

    Don't worry about delays, the effort is much appreciated.

  3. 12 hours ago, m0ngr31 said:

    Is there anything we can do to help get this working with 6.8.*?

    In the short term you could compile it yourself with the RMRR patch, but I'd imagine you will run into whatever the compatibility issue is.
     

    https://github.com/AnnabellaRenee87/Unraid-HP-Proliant-Edition

    Edit: I might be able to help with maintaining it?

    For users who want the letsencrypt support in Poste.io working but are already using a letsencrypt docker, all you need to do is share the .well-known folder between your Poste.io and letsencrypt dockers, i.e. in the Poste.io docker config:
    [Screenshot: Poste.io docker template with the .well-known folder mapped to the letsencrypt docker's .well-known path]

     

    This will not work if your domain has HSTS turned on with redirects to HTTPS (or at least this was the case with the version of letsencrypt bundled in the docker a while ago, as reported here: https://bitbucket.org/analogic/mailserver/issues/749/lets-encrypt-errors-with-caprover )


    You can instead mount the docker's default certificate file locations directly to the certificates from the letsencrypt/SWAG docker.

    To be explicit, these are my working volume mounts for SSL:

     

    /data/ssl/server.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem
    /data/ssl/ca.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem
    /data/ssl/server.key → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem


    I do not recall the exact details of why the above is optimal, but I suspect Poste builds its own full chain certificate, which results in some certificate mangling if you give it your fullchain cert rather than each file separately (various internal services inside the docker need different formats). I believe that without the mounts above, the administration portal will be unable to log you in.
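
    If it helps anyone replicate this outside the Unraid template, the same mounts expressed as plain docker run flags would look roughly like the below (a sketch only - it assumes the stock analogic/poste.io image and the SWAG appdata paths above, with DOMAIN swapped for your own):

    # Hedged example only - host paths on the left, container paths on the right.
    docker run -d --name poste \
        -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem:/data/ssl/server.crt \
        -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem:/data/ssl/ca.crt \
        -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem:/data/ssl/server.key \
        analogic/poste.io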

  5. 1 hour ago, Xaero said:

    to my knowledge they only have the nvidia docker runtime (which could easily be packaged) and the nvidia driver blobs (which could easily be packaged into DKMS format) 
    It would reduce the amount of effort required to maintain those builds. I'd approach it, but there's a reason I don't maintain packages anywhere, currently. Not to mention Unraid and limetech's 100% anti-package stance.

    We should probably summon @CHBMB for his thoughts on that with regard to the unraid nvidia plugin then.

  6. 11 hours ago, Xaero said:

    Ideally, when just needing to add kernel modules, you would compile just the required modules into *.ko for the purposes of unraid. Otherwise you run into the issues we see above with the monolithic nature of Linux and the fixed versioning of unraid (something I personally disagree with) 

    DKMS would be the ideal solution in the case of the Nvidia build - as instead of installing a new unraid build custom tailored with Nvidia baked in, one could select from available Nvidia driver packages, DKMS would do it's magic and poof you'd have Nvidia kernel modules for the active running kernel.

     

    In the case of standard kernel modules - I think we could probably start to approach it like openwrt and other embedded distributions have in the past:

    Compile the kernel with minimal built-ins to save on memory footprint and distribution size (unraid does this)

    Compile ALL of the modules as optional packages and make them available through the repository. (nerdpack is our repo, but it'd fill it with a lot of modules)

     

    This way if someone runs into a kernel module that they need, which would normally require recompiling a new kernel, they could just grab the package for the module and now it can be added to modules-load.d, or in the mkinitcpio.conf for early loading.

     

    Just my 2c from working with embedded Linux distros in the past and seeing how things have been handled there.

     

    Here's an example of a set of kmod-* packages for openwrt:

    https://archive.openwrt.org/snapshots/trunk/ar71xx/generic/packages/kernel/

    I think the custom packaging is necessary for the nvidia stuff in order for acceleration to be passed through to dockers (i.e. the CUDA libraries, which either need to be present in the bzroot installed on the USB or to be installed at boot time).

    With regard to modules, I agree we could do with the Unraid devs maintaining and standardising a repo of kernel modules. (That way there's a chain of trust in the compilation rather than just random users, and modules can be tested/tweaked to match releases if needed.)

  7. Probably should have added -

    I have this working by doing the following:

    Install from the nvidia unraid plugin, then boot and check it works, i.e. the GPUs show up with 'nvidia-smi'.

    Extract and save the nvidia kernel modules from the bzmodules file provided by the nvidia unraid plugin. They will be in KERNELVERSION/kernel/drivers/video - keep these files for later (one way to extract them is sketched below).
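
    A minimal extraction sketch, assuming bzmodules is the squashfs image Unraid mounts at /lib/modules (7zip will also open it, as noted further down; the bzmodules-extracted and saved-nvidia-modules names are just placeholders):

    # Hedged sketch only - adjust paths to wherever the plugin's bzmodules lives.
    unsquashfs -d bzmodules-extracted /boot/bzmodules
    mkdir -p /boot/saved-nvidia-modules
    cp bzmodules-extracted/*/kernel/drivers/video/nvidia*.ko /boot/saved-nvidia-modules/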

    If you need to use the HP RMRR patch, then apply the following between step 1 and step 2, ensuring you change the file path to the correct one:
     

    
    ##Make necessary changes to fix HP RMRR issues
    sed -i '/return -EPERM;/d' ./kernel/drivers/iommu/intel-iommu.c
    

    Compile and install bzimage-new and bzmodules-new using the method above in this thread, with the correct options for IP_VS support, replacing the existing files in /boot (they will be called bzimage and bzmodules - make a backup just in case!).


    Boot up, run 'modprobe ip_vs', then check it is working via docker swarm. (The modprobe might not be needed, but I didn't see the module loaded...)

    For testing, I hosted a default nginx on my nodes, then used curl to check they were alive with 'curl localhost:8888':

    docker service create --name my_web \
                            --replicas 3 \
                            --publish published=8888,target=80 \
                            nginx


    Assuming you get the standard nginx response from each node, it should be working.

    Now to get nvidia stuff working again.

    Now navigate back to where you saved the nvidia modules, 'insmod nvidia.ko', then insmod the remaining modules (the order I use is sketched below).

    Now check that they are loaded and your GPUs show up with 'nvidia-smi'
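
    A rough sketch of the load sequence, run from the directory where the saved modules live (nvidia.ko has to go first since the others depend on it, and nvidia-drm also depends on nvidia-modeset):

    # Hedged sketch only.
    cd /boot/saved-nvidia-modules    # wherever you stashed the extracted modules
    insmod nvidia.ko                 # core driver - must be loaded first
    insmod nvidia-uvm.ko             # unified memory, needed for CUDA
    insmod nvidia-modeset.ko         # modesetting support
    insmod nvidia-drm.ko             # DRM interface, depends on nvidia-modeset
    nvidia-smi                       # confirm the GPUs show up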

    I've had no success yet trying to include the nvidia stuff in the compilation itself, but I suspect it is absolutely key that you use the bzroot file from the unraid nvidia plugin, as it includes some of the required nvidia/CUDA software.

    I've added the working bzmodules and bzimage that I compiled with the RMRR patch to this post so people can skip compiling if they wish.
    I've also added the nvidia modules to the post so you don't need to extract them with 7zip etc.
     

    bzmodules bzimage

    nvidia-modeset.ko nvidia-uvm.ko nvidia.ko nvidia-drm.ko

  8. Hi all,

    I cannot see any downsides/issues, considering people have already been doing this to enable docker swarm support.

    It should be a simple change for the devs to make to the .config file used for compiling the unraid kernel.

    Could we get IP Virtual Server support turned on in the kernel as per:



    This should only require enabling the IP_VS options shown with their current status below (see the sketch after the list for what the .config lines might look like):

     

    Generally Necessary:

    - CONFIG_NETFILTER_XT_MATCH_IPVS: missing

    - CONFIG_IP_VS: missing

     

    Optional:

    - CONFIG_CGROUP_HUGETLB: missing

    - CONFIG_NET_CLS_CGROUP: missing

    - CONFIG_CGROUP_NET_PRIO: missing

    - CONFIG_IP_VS_NFCT: missing

    - CONFIG_IP_VS_PROTO_TCP: missing

    - CONFIG_IP_VS_PROTO_UDP: missing

    - CONFIG_IP_VS_RR: missing

    - CONFIG_EXT4_FS: enabled (as module)

    - CONFIG_EXT4_FS_POSIX_ACL: enabled

    - CONFIG_EXT4_FS_SECURITY: missing 
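
    A minimal sketch of what the corresponding kernel .config lines might look like (option names are taken from the status output above; whether each ends up built in (=y) or as a module (=m) is of course the devs' call):

    CONFIG_IP_VS=m
    CONFIG_IP_VS_NFCT=y
    CONFIG_IP_VS_PROTO_TCP=y
    CONFIG_IP_VS_PROTO_UDP=y
    CONFIG_IP_VS_RR=m
    CONFIG_NETFILTER_XT_MATCH_IPVS=m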

     

  9. @CHBMB looks like I am not the only one with nvidia who needs some kernel compile options to support docker swarm.

    Have to say that my compile is further complicated by the requirement for the HP RMRR patch and the nvidia support!
     

     

  10. On 6/11/2019 at 1:28 AM, CHBMB said:

    The DVB source scripts are here.  https://github.com/linuxserver/Unraid-DVB

     

    The Nvidia source scripts we deliberately are keeping closed as we're a tiny bit scared of Nvidia and the possibility of people using them to circumvent certain Nvidia restrictions.

    I'm trying to compile with both nvidia support and CONFIG_NETFILTER_XT_MATCH_IPVS turned on, in order to support docker swarm.

    I know you probably want to keep them closed source to prevent circumvention of nvidia restrictions, but I can tell you from testing that having access to your kernel compile scripts is not needed to achieve that.

    Could you open source the scripts, or give me a hand with the order of operations needed to compile this and get it working?

    At the moment this is proving problematic due to the lack of the nvidia modules in the compiled output.

    Edit: not the only one apparently -

     

  11. 15 minutes ago, saarg said:

    Might not be part of the driver install, but some other part needed to get it to work then. I didn't play around with it. 

    Just have some patients and you will see it when we release it. 

    Ha, no worries I appreciate the work.

    Since I sorted mine out with host drivers and by loading drivers in the dockers, I'm kinda curious how you guys are handling it. Nvidia-docker stuff? Or messing with the docker run options to always install drivers into every container?

    I do have NVENC-accelerated transcoding running reliably (from what I can tell) on my own system, using an edited Jellyfin docker, a custom ffmpeg build for it and the unraid scripts in my GitHub.

    The only thing I really need is a less buggy ffmpeg build - that said, if you supply your own ffmpeg compiled with NVENC support the way I've done, it seems stable and I didn't see any crashes (a quick smoke test is sketched below).
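
    As a quick check that an NVENC-capable ffmpeg build actually works (a sketch only - input.mkv and output.mkv are placeholder filenames):

    # Hedged example: if this produces output and nvidia-smi shows an encoder session, NVENC is working.
    ffmpeg -i input.mkv -c:v h264_nvenc -preset fast -c:a copy output.mkv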

    I edited the Jellyfin dockerfile to add the appropriate drivers (to the docker) and used the kernel scripts I adapted from yours to add the drivers to Unraid itself (the host).

    @CHBMB not sure if this will help you at all with your development?

  13. On 1/14/2019 at 12:18 PM, oneclear said:

    I want to say thank you to everyone involved in achieving this computer saving modification. I have spent quite some time trying to passthrough an Intel 4 port nic for a Pfsense VM on my Microserver G8, yesterday morning I was contemplating selling it off and building something else. I came across this thread and swapped the bzimage, saved me a lot of money and trouble. I am on 6.6.6 Stable. I really do appreciate the time and effort put in and just wanted to say thank you.

    +1 from me.

  14. I have now sorted this out and it is working with Jellyfin.

    The short answer is: you must install the GPU drivers to Unraid (i.e. you must recompile the kernel with the nvidia modules), reboot to enable them, and then reinstall the nvidia GPU and CUDA drivers every time you reboot.

    You must then pass through all the /dev/dri and /dev/nvidia* devices to your docker (a rough example is sketched below).

    The docker itself then needs the same versions of the GPU and CUDA drivers installed.

    You then need a supported program, e.g. ffmpeg compiled with NVENC, in order to use the acceleration.

    At this point you can query nvidia-smi on the host or in the docker to get the status of any GPU usage.
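
    A rough sketch of the device passthrough (device nodes vary per system, /dev/nvidia-uvm only appears once the uvm module is loaded, and in the Unraid template these go in the device/extra parameters fields rather than a docker run command):

    # Hedged example only - adjust device nodes and image to your own setup.
    docker run -d --name jellyfin \
        --device /dev/dri:/dev/dri \
        --device /dev/nvidia0:/dev/nvidia0 \
        --device /dev/nvidiactl:/dev/nvidiactl \
        --device /dev/nvidia-uvm:/dev/nvidia-uvm \
        jellyfin/jellyfin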

    You can see the status of my own hacky scripts here: https://github.com/Aterfax/nvidiadriversonunraidinstaller

    With regard to Emby - it should already have the NVENC-enabled ffmpeg, but I do not know if it has the nvidia drivers (GPU and CUDA) installed in the image.

    I also do not know if these drivers are fully necessary for the docker to use the GPU.

    I believe that eventually CHBMB will release this: https://github.com/CHBMB/Unraid-NVIDIA-Plugin

     

    This would then streamline the install of host Nvidia drivers.

  15. 13 minutes ago, Jerky_san said:

    Try loading without plugins

    Safe mode? Or is there a specific way to do so?

    Removed all plugins - I now have access via Chrome on my phone, but not from my desktop in Edge, Chrome or Firefox.

    Has something like fail2ban been added that has blacklisted my IP?
