
aterfax

Members
  • Content Count

    35
  • Joined

  • Last visited

Community Reputation

6 Neutral

About aterfax

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. Personally I have to try to integrate the nvidia patch, RMRR patch and docker swarm patching, so I understand where you are coming from. Don't worry about delays, the effort is much appreciated.
  2. In the short term you could compile it yourself with the RMRR patch - but I'd imagine you will come across whatever the issue is in terms of compatibility. https://github.com/AnnabellaRenee87/Unraid-HP-Proliant-Edition Edit: I might be able to help with maintaining it?
  3. For users who want the letsencrypt in Poste IO working but are already using a letsencrypt docker, all you need to do is share the .well-known folder between your Poste IO and letsencrypt dockers, i.e. in the Poste IO docker config (a path-mapping sketch is included after this list):
  4. Yes, I wouldn't have posted about how to do it if it didn't work.
  5. We should probably summon @CHBMB for his thoughts on that for the unraid nvidia plugin then.
  6. I think the custom packaging is necessary for the nvidia stuff in order for acceleration to be passed through to dockers (i.e. the CUDA libraries, which either need to be present in the bzroot installed on the USB or to be installed at boot time). WRT modules, I agree we could do with the Unraid devs maintaining and standardising a repo of kernel modules. (That way there's a chain of trust in the compilation rather than just random users, and modules can be tested/tweaked to match releases if needed.)
  7. Probably should have added - I have this working via doing the following:
     - Install from the nvidia unraid plugin - boot and check it works, i.e. GPUs showing up with 'nvidia-smi'.
     - Extract and save the nvidia kernel modules from the bzmodules provided by the nvidia unraid plugin. They will be in KERNELVERSION/kernel/drivers/video - keep these files for later.
     - If you need to use the HP RMRR patch, apply the following between step 1 and step 2, ensuring you change the file path to the correct one:
       ## Make necessary changes to fix HP RMRR issues
       sed -i '/return -EPERM;/d' ./kernel/drivers/iommu/intel-iommu.c
     - Compile and install the bzimage-new and bzmodules-new using the method above in this thread with the correct options for IP_VS support, replacing the existing files in /boot (they will be called bzimage and bzmodules - make a backup just in case!!!).
     - Boot up, 'modprobe ip_vs', then check it is working via docker swarm. (Modprobe might not be needed, but I didn't see it loaded...) For testing I hosted a default nginx on my nodes then used curl to check they were alive with 'curl localhost:8888':
       docker service create --name my_web \
         --replicas 3 \
         --publish published=8888,target=80 \
         nginx
       Assuming you get the standard nginx response from each node, it should be working.
     - Now to get the nvidia stuff working again: navigate back to where you saved the nvidia modules, 'insmod nvidia.ko' and the remaining modules (see the insmod sketch after this list), then check that they are loaded and your GPUs show up with 'nvidia-smi'.
     I've had no success trying to include the nvidia stuff in the former compilation as yet, but I suspect it is absolutely key for you to ensure you are using the bzroot file from the unraid nvidia plugin, as it includes some nvidia/CUDA software that is required. I've added my working bzmodules and bzimage that I compiled with the RMRR patch to the post so people can skip compiling if they wish. Also added the nvidia modules to the post so you don't need to extract them with 7zip etc... Attached: bzmodules, bzimage, nvidia-modeset.ko, nvidia-uvm.ko, nvidia.ko, nvidia-drm.ko
  8. Hi all, could we get IP Virtual Server support turned on in the kernel? I cannot see any downsides/issues, considering people have been doing this to enable docker swarm support, and it should be a simple change to the .config file used by the devs for compiling the unraid kernel (see the .config sketch after this list). This should only require enabling the IP_VS options shown as missing in the current status below:
     Generally Necessary:
     - CONFIG_NETFILTER_XT_MATCH_IPVS: missing
     - CONFIG_IP_VS: missing
     Optional:
     - CONFIG_CGROUP_HUGETLB: missing
     - CONFIG_NET_CLS_CGROUP: missing
     - CONFIG_CGROUP_NET_PRIO: missing
     - CONFIG_IP_VS_NFCT: missing
     - CONFIG_IP_VS_PROTO_TCP: missing
     - CONFIG_IP_VS_PROTO_UDP: missing
     - CONFIG_IP_VS_RR: missing
     - CONFIG_EXT4_FS: enabled (as module)
     - CONFIG_EXT4_FS_POSIX_ACL: enabled
     - CONFIG_EXT4_FS_SECURITY: missing
  9. @CHBMB looks like I am not the only one with nvidia and needing some kernel compile options to support docker swarm. Have to say that my compile is further complicated by the requirement for the HP RMRR patch and the nvidia support!
  10. I'm trying to compile with support for both nvidia and CONFIG_NETFILTER_XT_MATCH_IPVS turned on in order to support docker swarm. I know you probably want to keep the scripts closed source to prevent circumvention of nvidia restrictions, but I can tell you from testing that having access to your kernel compile scripts is not needed to achieve that. Could you open source the scripts, or give me a hand with the order of operations needed to compile with this working? At the moment this is proving problematic due to the lack of the nvidia modules in the compiled output. Edit: not the only one apparently -
  11. You can find the maximum concurrent sessions limit for all nVidia cards here: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
  12. Ha, no worries, I appreciate the work. Since I sorted mine out with host drivers and loading drivers in the dockers, I'm kinda curious about how you guys are sorting it out. Nvidia-docker stuff? Or messing with the docker run options to always install drivers to every container?
  13. Well, my bzroots are the same size for nvidia-patched and default unraid, so I'm not convinced? 92,576 KB?
  14. I thought they got put into bzmodules? I saw the nvidia modules under the video directory in modules during the kernel build process, and bzroot doesn't change size when patched for nvidia drivers?
  15. I do have NVENC accelerated transcoding running reliably (from what I can tell) on my own system using an edited Jellyfin docker, a custom FFMPEG for it, and the unraid scripts in my github. The only thing I really need is a less buggy ffmpeg build - that said, if you supply your own FFMPEG compiled with NVENC support the way I've done it, it seems stable and I didn't see any crashes (a quick NVENC sanity check is sketched after this list). I edited the dockerfile for jellyfin to add the appropriate drivers (to the docker) and used the kernel scripts I edited from yours to add the drivers to Unraid itself (the host). @CHBMB not sure if this will help you at all with your development?
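
Path-mapping sketch for post 3 above: a minimal illustration of sharing the letsencrypt challenge folder with the Poste IO container. The host path and both container paths here are assumptions (the letsencrypt docker's webroot and Poste IO's web root may differ on your setup), so adjust them to match your own containers.

    # letsencrypt docker already maps something like:
    #   /mnt/user/appdata/letsencrypt/www  ->  /config/www
    # Map the SAME host .well-known folder into the Poste IO container's web root (read-only is fine):
    docker run -d --name poste \
      -v /mnt/user/appdata/letsencrypt/www/.well-known:/opt/www/.well-known:ro \
      ...   # rest of your usual Poste IO options unchanged

With that in place, the letsencrypt docker writes the ACME challenge files and Poste IO serves them, so both containers can pass validation for the same host.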
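
Insmod sketch for post 7 above: the order I'd expect the saved nvidia modules to load in, since nvidia-drm depends on nvidia-modeset and everything depends on nvidia.ko. The directory is an assumption - use wherever you copied the .ko files out of bzmodules.

    cd /boot/extra/nvidia-modules     # assumed location of the saved .ko files
    insmod nvidia.ko                  # core driver first
    insmod nvidia-uvm.ko              # CUDA/unified memory, needed for acceleration in dockers
    insmod nvidia-modeset.ko
    insmod nvidia-drm.ko              # depends on nvidia-modeset
    nvidia-smi                        # GPUs should show up again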
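
.config sketch for post 8 above: the kernel options that would flip the "missing" IP_VS entries to enabled. Whether each is built as a module (=m) or built in (=y) is a judgment call, not something the check output dictates.

    CONFIG_NETFILTER_XT_MATCH_IPVS=m
    CONFIG_IP_VS=m
    CONFIG_IP_VS_NFCT=y
    CONFIG_IP_VS_PROTO_TCP=y
    CONFIG_IP_VS_PROTO_UDP=y
    CONFIG_IP_VS_RR=m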
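
NVENC sanity check for post 15 above: two quick commands to confirm an FFMPEG build actually has NVENC compiled in and can encode on the GPU. File names are placeholders; run this inside the Jellyfin container (or wherever the custom ffmpeg lives).

    ffmpeg -hide_banner -encoders | grep nvenc    # should list h264_nvenc / hevc_nvenc
    ffmpeg -y -i sample.mkv -c:v h264_nvenc -preset fast -c:a copy out.mkv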