Everything posted by aterfax

  1. Your poor grammar makes this very difficult to read - try using bullet points, commas and paragraphs to separate your different points. Please post your diagnostics. Which Space Invader video do you mean? And pastebin the error 43 output. Do you mean this error 43? https://mathiashueber.com/fighting-error-43-nvidia-gpu-virtual-machine/
  2. Personally I have to try to integrate the nvidia patch, the RMRR patch and the Docker Swarm patching, so I understand where you are coming from. Don't worry about delays; the effort is much appreciated.
  3. In the short term you could compile it yourself with the RMRR patch - but I'd imagine you will run into whatever the compatibility issue is. https://github.com/AnnabellaRenee87/Unraid-HP-Proliant-Edition Edit: I might be able to help with maintaining it?
  4. For users who want the Let's Encrypt support in Poste.io working but are already using a letsencrypt docker, all you need to do is share the .well-known folders between your Poste.io and letsencrypt dockers, i.e. in the Poste.io docker config. Note this will not work if your domain has HSTS turned on with redirects to HTTPS (or at least that was the case with the version of letsencrypt in the docker a while ago, as reported here: https://bitbucket.org/analogic/mailserver/issues/749/lets-encrypt-errors-with-caprover ). You can instead mount the default certificate files in the docker directly to the certificates from the letsencrypt/SWAG docker. To be explicit, my volume mounts for working SSL are:
     /data/ssl/server.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem
     /data/ssl/ca.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem
     /data/ssl/server.key → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem
     I do not recall the exact details of why this split is optimal, but I suspect Poste.io builds its own full-chain cert, which results in some cert mangling if you give it your fullchain.pem rather than each file separately (various internal services inside the docker need different formats). I believe that without the mounts above, the administration portal will be unable to log you in.
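     A minimal sketch of the same mounts expressed as docker run flags, for anyone configuring outside the Unraid template (the container name is my assumption; the appdata paths and domain are from my setup above and need adjusting to yours):
        # Map the letsencrypt/SWAG certs over Poste.io's default cert paths.
        docker run -d --name poste \
          -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem:/data/ssl/server.crt \
          -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem:/data/ssl/ca.crt \
          -v /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem:/data/ssl/server.key \
          analogic/poste.io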
  5. Yes - I wouldn't have posted about how to do it if it didn't work.
  6. We should probably summon @CHBMB for his thoughts on that for the unraid nvidia plugin then.
  7. I think the custom packaging is necessary for the nvidia stuff in order for acceleration to be passed through to dockers (i.e. the CUDA libraries, which either need to be present in the install bzroot on the USB or be installed at boot time). WRT modules, I agree we could do with the Unraid devs maintaining and standardising a repo of kernel modules. (That way there's a chain of trust in the compilation rather than just random users, and modules can be tested/tweaked to match releases if needed.)
  8. Probably should have added - I have this working by doing the following (a condensed shell sketch of the module steps follows this post):
     1. Install from the Unraid Nvidia plugin - boot and check it works, i.e. GPUs showing up with 'nvidia-smi'.
     2. Extract and save the nvidia kernel modules from the bzmodules provided by the Unraid Nvidia plugin. They will be in KERNELVERSION/kernel/drivers/video - keep these files for later.
     3. If you need to use the HP RMRR patch, then apply the following between step 1 and step 2, ensuring you change the file path to the correct one:
        ## Make necessary changes to fix HP RMRR issues
        sed -i '/return -EPERM;/d' ./kernel/drivers/iommu/intel-iommu.c
     4. Compile and install the bzimage-new and bzmodules-new using the method above in this thread, with the correct options for IP_VS support, replacing the existing files in /boot (they will be called bzimage and bzmodules - make a backup just in case!!!).
     5. Boot up, 'modprobe ip_vs', then check it is working via Docker Swarm. (The modprobe might not be needed, but I didn't see it loaded...) For testing I hosted a default nginx on my nodes, then used curl to check they were alive with 'curl localhost:8888':
        docker service create --name my_web \
          --replicas 3 \
          --publish published=8888,target=80 \
          nginx
        Assuming you get the standard nginx response from each node, it should be working.
     6. Now to get the nvidia stuff working again: navigate back to where you saved the nvidia modules, 'insmod nvidia.ko' and then the remaining modules, and check that they are loaded and your GPUs show up with 'nvidia-smi'.
     I've had no success trying to include the nvidia stuff in the former compilation as yet, but I suspect it is absolutely key to ensure you are using the bzroot file from the Unraid Nvidia plugin, as it includes some required nvidia/CUDA software. I've added the working bzmodules and bzimage that I compiled with the RMRR patch to this post so people can skip compiling if they wish, and also added the nvidia modules so you don't need to extract them with 7zip etc.
     Attachments: bzmodules bzimage nvidia-modeset.ko nvidia-uvm.ko nvidia.ko nvidia-drm.ko
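     The condensed sketch of steps 5 and 6 as shell commands (the module directory path and KERNELVERSION are placeholders for wherever you saved the extracted modules; the insmod ordering is my assumption, based on nvidia.ko being the base module the others depend on):
        # Placeholder path - point this at wherever you saved the extracted modules.
        MODDIR=/boot/saved-modules/KERNELVERSION/kernel/drivers/video

        # Load IP_VS for Docker Swarm (may already be loaded on your build).
        modprobe ip_vs

        # Load the Nvidia modules; nvidia.ko first, as the others depend on it.
        insmod "$MODDIR/nvidia.ko"
        insmod "$MODDIR/nvidia-uvm.ko"
        insmod "$MODDIR/nvidia-modeset.ko"
        insmod "$MODDIR/nvidia-drm.ko"

        # Confirm the GPUs are visible again.
        nvidia-smi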
  9. Hi all, could we get IP Virtual Server support turned on in the kernel? I cannot see any downsides/issues, considering people have been doing this themselves to enable Docker Swarm support, and it should be a simple change to the .config file the devs use for compiling the Unraid kernel. This should only require enabling the IP_VS options, whose current status is shown below:
     Generally Necessary:
     - CONFIG_NETFILTER_XT_MATCH_IPVS: missing
     - CONFIG_IP_VS: missing
     Optional:
     - CONFIG_CGROUP_HUGETLB: missing
     - CONFIG_NET_CLS_CGROUP: missing
     - CONFIG_CGROUP_NET_PRIO: missing
     - CONFIG_IP_VS_NFCT: missing
     - CONFIG_IP_VS_PROTO_TCP: missing
     - CONFIG_IP_VS_PROTO_UDP: missing
     - CONFIG_IP_VS_RR: missing
     - CONFIG_EXT4_FS: enabled (as module)
     - CONFIG_EXT4_FS_POSIX_ACL: enabled
     - CONFIG_EXT4_FS_SECURITY: missing
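     For reference, a sketch of the corresponding .config fragment (the option names are from the status output above; building the tristate options as modules with =m rather than built-in with =y is my assumption):
        # Enable IP Virtual Server support for Docker Swarm's routing mesh.
        CONFIG_IP_VS=m
        CONFIG_IP_VS_NFCT=y
        CONFIG_IP_VS_PROTO_TCP=y
        CONFIG_IP_VS_PROTO_UDP=y
        CONFIG_IP_VS_RR=m
        CONFIG_NETFILTER_XT_MATCH_IPVS=m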
  10. @CHBMB looks like I am not the only one running nvidia and needing some kernel compile options to support Docker Swarm. I have to say that my compile is further complicated by requiring both the HP RMRR patch and the nvidia support!
  11. I'm trying to compile with support for both nvidia and CONFIG_NETFILTER_XT_MATCH_IPVS turned on, in order to support Docker Swarm. I know you probably want to keep the scripts closed source to prevent circumvention of the nvidia restrictions, but I can tell you through testing that access to your kernel compile scripts is not needed to achieve this. Could you open source the scripts, or give me a hand with the order of operations to compile this and get it working? At the moment it is proving problematic due to the lack of the nvidia modules in the compiled output. Edit: not the only one apparently -
  12. How odd. I've fixed it now anyway. I think I must have customised it, and it then got reset when it was re-installed.
  13. Binhex, did you change the default /data mount to /media? I only just noticed this as all my downloads started failing!
  14. Tried unsetting and re-setting the MTU on the bond; it makes no difference, the bond MTU is stuck at 1500, i.e.:
      eth1 1500
      eth0 1500
      bond0 1500
      br0 9198
      It is potentially an issue with nginx detecting the wrong MTU from one of these interfaces, causing packet fragmentation and hence the inability of client devices to connect with their MTU set to jumbo (9198 etc...).
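      For anyone reproducing this, a minimal sketch of the checks involved, using iproute2 (the interface names are from my box):
         # Show the current MTU on the bond members, the bond, and the bridge.
         ip link show eth0
         ip link show eth1
         ip link show bond0
         ip link show br0

         # Attempt to raise the bond MTU - this is the setting that appears to have no effect.
         ip link set dev bond0 mtu 9014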
  15. I noticed the issue because my hardware supports jumbo frames and was working fine prior to the Unraid upgrade. My devices that stopped connecting correctly to Unraid still had jumbo frames turned on at 9014 MTU; as soon as I turned jumbo frames off on each device, they started working correctly. Edited my image to show I am doing balance-rr - at the moment I also have one interface in that set unplugged.
  16. If someone else can reproduce this, can you mark it more urgent? It's a pain in the ass to work out why everything stopped working on upgrade when half your networking services work and half of them don't.
  17. Simple as: I have the MTU set at 9014, but the MTU on the interface via ifconfig shows 1500. I am particularly irritated as this has been causing massive fragmentation/error issues with my other devices, where the MTU config actually was set and working. It also breaks the web GUI on those devices, which is probably something specific to nginx; SSH works fine, however.
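      A quick way to demonstrate the fragmentation from a client is to ping the server with the don't-fragment bit set: if the effective path MTU is really 1500, the jumbo-sized probe fails while the standard one succeeds (192.168.1.10 is a placeholder for the Unraid box's IP):
         # 8972-byte payload + 28 bytes of ICMP/IP headers = a 9000-byte packet.
         ping -M do -s 8972 -c 3 192.168.1.10

         # Standard-MTU probe for comparison (1472 + 28 = 1500) - should succeed.
         ping -M do -s 1472 -c 3 192.168.1.10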
  18. Also would like this feature - a proper form login page.
  19. Odd problem here: Deluge is running and OpenVPN is running. The docker image is happily connected to the VPN provider - I can ping and curl etc. with no problem, and it shows the correct VPN IP address. Deluge, however, cannot connect to any trackers, yet it can connect to peers fine and is downloading torrents OK (sans trackers). Seeing as the docker container can connect online via the VPN without issue, I have no idea what is wrong. Ideas? Edit: I suspect the provider is blocking trackers...
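      For anyone debugging the same thing, a sketch of the in-container checks I mean (binhex-delugevpn is an assumed container name and tracker.example.org a placeholder tracker; adjust both):
         # Confirm the container's outbound IP is the VPN endpoint.
         docker exec -it binhex-delugevpn curl -s ifconfig.io

         # Try an HTTP tracker's announce endpoint directly from inside the container.
         docker exec -it binhex-delugevpn curl -v http://tracker.example.org:6969/announce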