Leaderboard

Popular Content

Showing content with the highest reputation on 05/25/20 in all areas

  1. This is the support thread for multiple plugins, including: AMD Vendor Reset Plugin, Coral TPU Driver Plugin, hpsahba Driver Plugin. Please always state which plugin you need help with, and include the Diagnostics from your server and a screenshot of your container template if your issue is related to a container. If you like my work, please consider making a donation
    1 point
  2. Hello, I have set up a Unifi Controller docker and gave it its own IP address, but I also want to make a static assignment in pfSense. Does anyone know if dockers change MAC address when they update, or whenever? Except in the case where I delete and reinstall them, I guess. Is there any way to set a custom MAC address? --SOLUTION-- I should have done a better web search first; for anyone who might need this: edit the container and in Extra Parameters add "--mac-address 02:42:xx:xx:xx:xx". Use a MAC from the range 02:42:ac:11:00:00 to 02:42:ac:11:ff:ff, as these addresses are meant for Docker containers. I don't know if it works for MACs outside this range.
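For reference, a minimal sketch of what this ends up looking like as a plain docker run (the image, network and MAC below are placeholder examples; on unRAID you only put the --mac-address part into the Extra Parameters field of the template):

    # Placeholder example - adjust the image, network and MAC to your own setup
    docker run -d \
      --name unifi-controller \
      --network br0 \
      --mac-address 02:42:ac:11:00:05 \
      linuxserver/unifi-controller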
    1 point
  3. About
OpenEats is a recipe management site that allows users to create, share, and store their personal collection of recipes. The app is intended for a single user or a small friends/family group. Demo: https://open-eats.github.io/ GitHub: https://github.com/open-eats/OpenEats
In my hunt for a self-hosted recipe manager on unRAID, I found a lot of interest in people wanting an OpenEats container, but due to the official OpenEats setup being split into 3 different containers (nginx, web, and API) it didn't seem like anyone had it successfully running. I took the liberty of trying to get this done for everyone in the community! I found a request for this type of setup (single dockerfile) on GitHub and decided to start diving in to see if I could get this running on unRAID through a reverse proxy, and here we are! I based my template for this container off of this GitHub repo and this Docker Hub repo.
Requirements
MariaDB (I recommend LinuxServer's container) or a compatible MySQL server. Note that there IS a version of this container that comes with MariaDB baked in, but it is NOT currently functioning. It is the :mariadb tag, and currently attempting to run it will get you a command failure due to a +x permission error. I've reached out to the devs to hopefully get this fixed! I would recommend the separate MariaDB container regardless!
Setup
MariaDB: You simply need to set up a MariaDB database. I would suggest just calling it "openeats", and you can attach permissions for a user to that database. You should be able to use Adminer (there's a CA docker container already!) if you need a GUI-based way to add databases/users/permissions, etc. Once you have your database, user, and password set up, just put them into the corresponding spots on this docker container's template (a minimal SQL sketch is at the end of this post). Alternatively, you CAN use the SQL root user/password, but I do not recommend that; it's better to have a user with permissions only on this single database.
First Run: Please wait while the container creates all the necessary tables in the SQL database. This can take 5-10 minutes or so, and the container will have NO log output while it does this. Please be patient; if you interrupt this by stopping the container it will leave you with a broken database! You'll know it's done either when the site is fully working or when you start seeing log output for the container.
Users & Groups: After you get everything running, you can log in with your superuser account and you can even create other users who will have access to the site. If you plan on having more than just yourself access this OpenEats instance, I would suggest going in, creating a group, granting it the permissions you want users to have, and then adding users to that group.
Variables
OPENEATS_VERSION: This lets you choose a specific version. You can choose "master" here to get the latest version direct from GitHub on each run/restart of the container, or you can choose a specific version from here as well. If you want to run a specific version, just enter the version number, for example 1.5.0.
ALLOWED_HOST: This defaults to * (all); however, to be more restrictive you can list the IP address of your unRAID server (allows local LAN access), along with any reverse-proxy domain you want to access from (for access over the Internet), e.g. it may look like this: 192.168.1.10, openeats.yourdomain.com
DJANGO_SECRET_KEY: This needs to be a long randomly generated string for cryptography. See here for more information.
Other Variables: A list of other OpenEats variables is available here.
Reverse Proxy
This should work just fine in either an Nginx reverse proxy (like LinuxServer's Let's Encrypt container, or Nginx Proxy Manager). I would recommend setting this up as a subdomain, such as https://openeats.yourdomain.com. My current Let's Encrypt reverse proxy server block looks like this:

    #OPENEATS
    server {
        listen 80;
        server_name openeats.yourdomain.com;
        return 301 https://openeats.yourdomain.com$request_uri;
    }
    server {
        listen 443 ssl http2;
        server_name openeats.yourdomain.com;
        location / {
            include /config/nginx/proxy.conf;
            proxy_pass http://192.168.1.10:8760/;
        }
    }

Known Issues
When running behind a reverse proxy, I cannot seem to get the admin page to load. A workaround for now is to just access it locally to make the changes you need; since there shouldn't be too much you regularly have to do there, I don't see this as being a huge issue, but I will try to figure out why this is and work towards a fix. I didn't see this as a reason to delay release of this template/container because it still works fine locally.
When creating a recipe it will show a broken image as the placeholder until you upload your own. Saving it without uploading one will still auto-generate a placeholder after saving. Cosmetic-only issue.
Creating a recipe link does not seem to currently work; it just gets stuck at saving the recipe and nothing ever happens. Thank you /u/ziggie216 on Reddit for reporting this.
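As mentioned in the MariaDB setup section above, here is a minimal sketch of creating the database and user from the unRAID terminal (the "mariadb" container name, user name and password are placeholders; you can do the same thing through Adminer if you prefer):

    # Placeholder names and password - adjust to your own setup
    docker exec -it mariadb mysql -uroot -p -e "
      CREATE DATABASE openeats;
      CREATE USER 'openeats'@'%' IDENTIFIED BY 'change-me';
      GRANT ALL PRIVILEGES ON openeats.* TO 'openeats'@'%';
      FLUSH PRIVILEGES;"

Whatever names you use here are what go into the database fields of the container template.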
    1 point
  4. EDIT (March 9th 2021): Solved in 6.9 and up. Reformatting the cache to the new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum.
### Hey Guys,
First of all, I know that you're all very busy on getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I won't be expecting this to be on top of the priority list, but I'm hoping someone on the developer team is willing to invest (perhaps after the release).
Hardware and software involved: 2 x 1TB Samsung EVO 860, set up with LUKS encryption in a BTRFS RAID1 pool.
### TLDR (but I'd suggest to read on anyway 😀)
The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly. This appears to be happening only on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but not sure). Hosting the Docker files directory on /mnt/cache instead of using the loop device seems to fix this problem. A possible idea for implementation is proposed at the bottom. Grateful for any help provided!
### I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug. I also was able to build (kind of) a workaround for my situation. More details below.
So to see what was actually hammering on the cache I started doing all the obvious, like using a lot of find commands to trace files that were written to every few minutes, and also used the file activity plugin. Neither was able to trace down any writes that would explain 400 GBs worth of writes a day for just a few containers that aren't even that active.
Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.
This gave me a situation I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them in a BTRFS RAID1 setup, created the loop device on the BTRFS mountpoint (same as /dev/cache) and mounted it on /var/lib/docker. I made sure I had the NoCow flag set on the IMG file like unRAID does. Strangely this did not show any excessive writes; iotop shows really healthy values for the same workload (I migrated the docker content over to the VM).
After my Debian troubleshooting I went back over to the unRAID server, wondering whether the loop device was created weirdly, so I took the exact same steps to create a new image and pointed the settings from the GUI there. Still the same write issues.
Finally I decided to take the whole image out of the equation and took the following steps:
- Stopped docker from the WebGUI so unRAID would properly unmount the loop device.
- Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint
- Created a share on the cache for the docker files
- Created a softlink from /mnt/cache/docker to /var/lib/docker
- Started docker using "/etc/rc.d/rc.docker start"
- Started my Bitwarden containers.
Looking into the stats with "iotop -ao" I did not see any excessive writing taking place anymore. I had the containers running for about 3 hours and maybe got 1GB of writes total (note that on the loop device this gave me 2.5GB every 10 minutes!)
Now don't get me wrong, I understand why the loop device was implemented. Dockerd is started with options to make it run with the BTRFS driver, and since the image file is formatted with the BTRFS filesystem this works with every setup; it doesn't even matter whether it runs on XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache because pointing it to /mnt/user would not allow me to start using the BTRFS driver (obviously the unRAID filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the container; all is based on the assumption everyone is using docker on BTRFS (which of course they are because of the container 😁).
I must say that my approach also broke when I changed something in the shares; certain services get a restart, causing docker to be turned off for some reason. No big issue since it wasn't meant to be a long term solution, just to see whether the loop device was causing the issue, which I think my tests did point out.
Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all docker containers off all day because 300/400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've pointed out that it's not needed at all. It does defeat the purpose of my NAS and SSD cache though, since its main purpose was hosting docker containers while allowing the HDs to spin down.
Again, I'm hoping someone on the dev team acknowledges this problem and is willing to invest. I did get quite a few hits on the forums and Reddit without anyone actually pointing out the root cause of the issue. I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking of possible ways to implement a workaround, like adjusting the Docker Settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), start docker on a share on the /mnt/cache partition instead of using the vDisk. In this way you would still keep all the advantages of the docker.img file (cross filesystem type) and users who don't care about writes could still use it, but you'd be massively helping out others who are concerned about these writes.
I'm not attaching diagnostic files since they would probably not show what's needed. Also, if this should have been in feature requests, I'm sorry, but I feel that, since the solution is misbehaving in terms of writes, this could also be placed in the bug report section. Thanks though for this great product, I have been using it so far with a lot of joy! I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly. Cheers!
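For anyone who wants to reproduce the test, roughly the commands behind the steps above (a sketch from memory, not a supported configuration; it assumes a cache share named "docker" and only survives until something restarts the docker service):

    # Stop the docker service so unRAID unmounts the docker.img loop device
    /etc/rc.d/rc.docker stop
    # (rc.docker also needs its mountpoint check removed, as described above)
    mkdir -p /mnt/cache/docker        # share on the cache pool to hold the docker files
    rmdir /var/lib/docker             # remove the now-empty mountpoint directory
    ln -s /mnt/cache/docker /var/lib/docker
    /etc/rc.d/rc.docker start
    iotop -ao                         # watch accumulated writes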
    1 point
  5. For what it's worth it seems you have to add a comment. You cannot JUST do a rating.
    1 point
  6. Since nobody directly answered this. Parity drives should spin down unless something is writing to the array. So it depends on how frequently your array is written. I can certainly imagine use cases where parity seldom spins up.
    1 point
  7. Yeah, I saw that, but it doesn't actually state that Limetech won't support you, just that you need to specify it's a custom kernel in your post. It makes it kinda sound like Limetech MAY support you; just my viewpoint, you understand :-). edit - saw your alteration, looks good 🙂
    1 point
  8. Appreciated, this was no easy task since I had zero understanding of how to compile a kernel; now I know a little bit... I was also looking for a way to upgrade my drivers a little bit faster and also to add custom kernel modules the 'easy' way (I totally understand that linuxserver can't build a new image for each new driver version...). If you have any suggestions, feel free to contact me. Btw: I uploaded the container already, but it will take a bit to update in the CA App.
    1 point
  9. Can I be the first to say, wow! I can see how this will be VERY useful for people wanting to pass through hardware to containers, and being able to build out an image is impressive work indeed! And of course it takes the load off LSIO of producing the custom image every time unRAID bumps the version. A real game changer!
    1 point
  10. As others have posted here, you can't blame Plex, or any single Docker. Something is taking normal writes and amplifying them massively. In my case stopping Plex makes a difference, but only reduces it by about 25%, and rampant writes continue. About 1 GB a minute as I look at it right now! I don't want anyone to think the problem has been solved and the cause was Plex. That isn't the case. It's much more fundamental than that.
    1 point
  11. Icon Collection 1a This is my first try at Category Docker Folder icons. I have done them in two color versions. Please understand that I only use the "Light Theme"; I am not sure how these would look on the Dark Theme. I am pasting a zip file that contains: icons at 128px x 128px, the layered Photoshop file for anyone to modify, and a preview image of the collection (image above). Anyone with rudimentary PS skills can easily create variations. I would love constructive criticism.... please don't say you like it if you don't. If you don't, please tell me why. Thanks, H. PS: Icons are much better quality than what the image above shows. I think the forum over-compresses the jpgs.
    1 point
  12. This is really good. I have brought this up at my job plenty of times. If you support multiple languages you can cater to another group of users and grow the business. I am fluent in Spanish and English and I believe this would take Unraid to another level. Good job Unraid. Looking forward to future updates.
    1 point
  13. Yes Sir! That's good, also probably a good idea to create an international section in the forums, with a sub-forum for each supported language.
    1 point
  14. Go to the container Edit page; at the top right, click "Basic View" (to switch to Advanced). Find the "Extra Parameters" field. Either add or edit the `--hostname` flag to use the hostname you want.
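For example (the hostname value is just a placeholder; only the flag goes into the Extra Parameters field, the rest of the docker run command is built by the template):

    --hostname my-container-hostname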
    1 point
  15. Yes, but that is too much to write when in a hurry
    1 point
  16. And .... that sorted it! Must admit I just chucked in both HBAs a few years back, never expecting to hit 125TB. Once disabled, all 14 drives are accessible. Many thanks!
    1 point
  17. Looks good. Now you have appdata, domains, and system all on cache, and all other shares on the array. Since your cache is small, I recommend setting all shares except those to cache-no. Then go to Settings - Docker and enable dockers again. Later you can decide if it makes sense to use cache for anything else, but keep it limited.
    1 point
  18. Fastest way would be to rebuild one drive at a time outside the enclosure, or 2 if you have dual parity.
    1 point
  19. Okay, thank you. I found the access to the console.
    1 point
  20. As long as you disable the Docker and VM services, then yes. It's not enough to just stop the containers and VMs. The Docker and VMs tabs should be gone from the GUI during the move.
    1 point
  21. Hi, first thanks, great docker, nice interface. I just set up my first reverse proxy domain and it all works great, hosted at Cloudflare. However, when I scan my site on securityheaders I get red warnings for these headers: Content-Security-Policy, X-Frame-Options, Referrer-Policy, Feature-Policy. Can someone please tell me how to fix this to get green? Thanks in advance.
    1 point
  22. Set your machine type to q35-2.6; it's due to old QEMU drivers in pfSense.
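If you want to double-check from the command line, a sketch (the VM name "pfSense" is just an example):

    # Dump the VM definition and look at the machine type
    virsh dumpxml pfSense | grep -i machine
    # After changing it in the VM edit form you should see something like:
    #   <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>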
    1 point
  23. Welcome, and good luck!
    1 point
  24. I ran into MCEs with my old server. The recommendations from here were that I contact the CPU & motherboard vendors to see if there was anything they could do/help with. I ended up having to replace the CPU. Not saying that's guaranteed to be your only course of action here, but bracing you for the worst. Wait for someone more knowledgeable to chime in, but you may want to at least start checking with your vendors.
    1 point
  25. You should move the rest of appdata back to cache now that you have room. Go to Settings - Docker and disable. This will make sure all docker related files are closed so they can be moved. Change the appdata share to cache-prefer, run Mover, wait for it to complete. Post new diagnostics.
    1 point
  26. plex does often take a lot of space in appdata, but that seems excessive. Mine is only taking 11G, but my media collection is perhaps modest by some standards. Where do you have plex transcoding? By default transcoding goes to a subdirectory in the plex appdata. There are other ways to set that up.
    1 point
  27. You need to do that same du investigation in /mnt/user/appdata, since it's not just on the cache drive, it's scattered around the entire array.
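In case it's unclear, this is the kind of command meant (a sketch; adjust the depth or path as you like):

    # Show how much space each app's folder uses across the whole user share
    du -h --max-depth=1 /mnt/user/appdata | sort -h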
    1 point
  28. https://support.plex.tv/articles/202529153-why-is-my-plex-media-server-directory-so-large/
    1 point
  29. Try to add the following line under the Advanced tab: location = / { return 301 $scheme://$http_host/ubooquity/; }
    1 point
  30. One or more of your containers is keeping the working data that should be in other shares in the appdata share. You need to determine which subfolders in appdata are taking up so much space, and then we can look at the configuration for those apps.
    1 point
  31. I don't even think that's possible; the game needs its own port and you simply can't share it with the HTTPS port (think of it the other way around: you also can't share the HTTP port 80 with the HTTPS port 443...). One other thing to think about: if you encrypt the data through HTTPS when you proxy it through Let's Encrypt, it's possible that the game client doesn't know how to handle HTTPS traffic... A game server is simply not a website/server. To connect to the console, please read the description of the container; the answer is there. I think you can also issue these commands from the in-game console when you are the admin/server OP.
    1 point
  32. As it says, domains is where VMs are saved, and people usually keep that on cache. And domains is all on cache according to your Diagnostics. Up to you whether you want to recover that small amount of space. Still unclear what is using up all that cache space though. Are you sure you don't have any files at the top level of cache, not inside any folder, perhaps accidentally created? Go to Shares - User Shares and click the Compute All button. You will probably have to wait a few minutes for the result. If it hasn't displayed the results after several minutes refresh the page. Then post a screenshot. Also, go to Main - Cache Devices and click on the folder icon at the far right under View. Post a screenshot.
    1 point
  33. OK. Media is all on the array now, but your cache is still mostly full. Do you have any VMs? I now see what the main culprit actually is though. Your docker image is 80G. 20G should be more than enough, and if it's not you have one or more of your docker applications misconfigured. I see you're currently using only 13G so that isn't unreasonable. Why did you set it to 80G? Have you had problems filling docker image? Making it larger won't fix the problem of filling docker image, it will just make it take longer to fill. Go to Settings - Docker, disable and then delete the docker image from that same page. Change it to 20G and enable to recreate. Then go to the Apps page and use the Previous Apps feature. It will let you reinstall your dockers exactly as they were. Then post new diagnostics.
    1 point
  34. Deluge suddenly won't connect to peers anymore. Sonarr/Radarr are still pushing torrents to it, but they just sit there with the status "Downloading" stuck at 0% not making any connections. "Tracker Status" is blank, but I know there's nothing wrong with the tracker. I'm not using a VPN, I've always had that turned off. • I can connect to the same torrents from my PC just fine, so I know it's not an issue at the network/tracker level. • Tested multiple trackers/torrents, same issue. • Tried rebooting the Deluge Docker, and rebooting the entire unRaid server just to be safe. No dice. • No errors in log. I have Deluge set to automatically update, so I'm not sure if an update came out recently that's causing this... because I haven't made any changes to my setup. From what I can tell this all started about 4 days ago. Does anyone have any idea what's going on? Update: It must have something to do with an update, because by rolling back to "binhex/arch-delugevpn:2.0.3_23_g5f1eada3e-1-03" I was able to get everything working again (after rechecking all of my torrents). Not sure if this is a known issue, or if there's a better workaround. Does anyone know what happened between that version and the current one that caused it to break on me? Does this only work for people using VPNs now or something?
    1 point
  35. Your cache is too small to use for much except appdata and other docker/VM related shares (domains, system). Stop writing to your server until you get everything else moved from cache. domains and system are all on cache where they belong so they are looking good. Don't change anything about them. appdata is cache-prefer, which is how it should normally be, except it has apparently overflowed and has files all over the array now. Set appdata to cache-only for now so it will be ignored by mover until we get other things moved off cache. You have a share anonymized as M---a, probably Media, which I'm sure is the main culprit. You currently have that set to cache-no, which mover ignores, and it has files on cache. Probably you set it to cache-no trying to fix your problem. Set that share to cache-yes. Then run Mover (Array Operation - Move Now). Wait for it to complete then post new diagnostics so we can check your progress. There are going to be some additional steps required to get things as they should be.
    1 point
  36. I hope you are planning to recruit moderators from each of the supported languages.
    1 point
  37. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post.
    1 point
  38. Hi, and thank you for the good work you do. The Terraria-TShock Docker: I'm trying to run version 1.4.0.2 but it doesn't pick up the latest version of TShock (v4.4.0-pre3) and Terraria (the Game Version variable is set to 1.4.0.2). The log:
TShock 4.3.26.0 (Mintaka) now running.
AutoSave Enabled
Backups Disabled
Welcome to TShock for Terraria. Initialization complete.
[Server API] Info Plugin TShock v4.3.26.0 (by The TShock Team) initiated.
Terraria Server v1.3.5.3
And if I manually overwrite the files it fails. So my question is: is it possible to choose the TShock version yourself with a variable?
    1 point
  39. Built ZFS 0.8.4 for unRAID 6.8.3 & 6.9.0-beta1 (kernel 5.5/5.6 officially supported in this ZFS version) The upgrade is done when you reboot your server Changelog can be found here: https://github.com/openzfs/zfs/releases/tag/zfs-0.8.4
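If you want to confirm the module actually updated after the reboot, either of these should report 0.8.4 (assuming the plugin loaded the new module):

    cat /sys/module/zfs/version
    zfs version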
    1 point
  40. Even though I created a new "Path" variable and set the Container Path to /config/www/gallery/galleries/ in the docker, this is still uploading photos. I'm assuming this is making duplicate copies of my photos from my array and uploading them to appdata/piwigo? Is there any way to just point the docker to my Photo share and let it pick from there to create the albums, etc?
    1 point
  41. I started understanding how to put this all together and wanted to throw some info out there for those that need it.
First of all, I would recommend that if you are going to use the Vanilla version of MC (what binhex has provided):
- Make sure the docker is not running
- Browse out to your appdata\binhex-minecraftserver\minecraft folder
- Edit the server.properties file with Notepad++ (I'm using Windows for all of this)
- Change the following settings if you like:
  difficulty=[easy|hard]
  gamemode=[creative|adventure|survival]
  force-gamemode=[true|false]
  level-name=world <=== This is the folder name all your game data is saved into
  motd=Logon Message <=== Message displayed when you log into the server from a MC client
Now, if you are like me, you want to use Forge or Bukkit. In this case:
- Create a folder on your C:\ drive called "Minecraft"
- Download the minecraft server file from HERE, and place it into C:\Minecraft (believe it's called 'minecraft_server.1.14.4.jar')
- Double-click the file, and wait for a minute as it downloads some MC server files.
- When it stops, edit the EULA.txt file, and change the line inside from false to true: eula=true
- Double-click on the minecraft_server.1.14.4.jar file again and wait for it to finish
- Type in "/stop". This will kill the minecraft server.
- Download Forge for the version of MC server you just downloaded (you want the INSTALLER button in the recommended box on the site)
- Place this file (forge-1.14.4-28.1.0.jar) in C:\Minecraft
- Double-click on this file. Select SERVER and change the path to C:\Minecraft
- Let it perform its magic
- Once finished, again, shut it down with "/stop"
- Now copy the contents of C:\Minecraft to appdata\binhex-minecraftserver\minecraft
- Delete the file appdata\binhex-minecraftserver\perms.txt (this will restore the default permissions to the files you copied over)
- In Unraid, edit the docker and create a new variable
- Click SAVE and then APPLY/DONE
- Fire up the docker
This will use the Forge jar file within the docker container, instead of the vanilla jar file. From this point, if you want to add resource packs or mods, you can download them and install them into the "mods" or "resourcepacks" folder as necessary. These folders may need to be created. A good mod to verify that your server is working is FastLeafDecay-Mod-1.14.4.jar. You can find it HERE. Chop a tree down and it should dissolve a lot quicker than normal. I would also recommend adding one or two mods at a time and testing. Let me know if you'd like more details on the above.
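If you prefer a command prompt over double-clicking, the same vanilla-then-Forge steps look roughly like this (a sketch, assuming the same file names as above; I believe the Forge installer accepts --installServer for a headless install, but double-clicking works just as well):

    cd C:\Minecraft
    rem First run generates the server files, then edit eula.txt and set eula=true
    java -jar minecraft_server.1.14.4.jar nogui
    rem Run again after accepting the EULA, let it finish, then type /stop
    java -jar minecraft_server.1.14.4.jar nogui
    rem Headless equivalent of choosing SERVER in the Forge installer
    java -jar forge-1.14.4-28.1.0.jar --installServer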
    1 point
  42. I wanted to summarize how I got Mullvad working with DelugeVPN, as I had to piece together several "solutions" from different comments in this thread and there was some incorrect info; likely old.
First go to https://mullvad.net/en/download/config/?platform=linux (you may have to sign into your Mullvad account first), select a region, leave "Use IP addresses" and "Connect via our bridges" unchecked, then click "Download". This will download a .zip file with all your config and cert files. The .zip file will contain a folder with the following files:
- mullvad_ca.crt (the cert file)
- mullvad_<whatever region you selected>.conf (the config file)
- mullvad_userpass.txt (a text file with your account number)
- update-resolv-conf (some file with no file extension that I have no idea what it does lol)
You will extract these files later, after we do the initial setup for DelugeVPN.
Next grab binhex-delugevpn from Community Applications (you've probably already done this; if so, just click "edit" on the binhex-delugevpn docker container) and adjust the following container settings:
- Host Path 2 (Container Path: /data): set this to your downloads location, usually a share you created. Mine is /mnt/user/Downloads. If you are using Sonarr or Radarr it should be the same /data path they use.
- Key 1 (VPN_ENABLED) should be set to "yes"
- Key 2 (VPN_USER) should be your Mullvad account number; a 16 digit number you received when you set up your Mullvad account, which can also be found in mullvad_userpass.txt. Do not include the "m" as shown in the .txt file, just the 16 digit account number.
- Key 3 (VPN_PASS) same account number as Key 2, no "m".
- Key 4 (VPN_PROV) set to "custom"
- Key 6 (STRICT_PORT_FORWARD) set to "yes"
- Key 8 (LAN_NETWORK) put the first 3 numbers of your router's IP (most likely 192.168.1) followed by .0/24 (example "192.168.1.0/24")
Everything else can be left alone. Click apply. It will pull the container and run it.
Open a file explorer and navigate to <your server>/appdata/binhex-delugevpn/openvpn/. Now extract the files mullvad_ca.crt, mullvad_<whatever region you selected>.conf, mullvad_userpass.txt and update-resolv-conf from the .zip you downloaded from Mullvad into <your server>/appdata/binhex-delugevpn/openvpn/. Change the file extension of mullvad_<whatever region you selected>.conf to mullvad_<whatever region you selected>.ovpn. Open mullvad_<whatever region you selected>.ovpn with a text editor and add the following lines to the bottom:
pull-filter ignore "route-ipv6"
pull-filter ignore "ifconfig-ipv6"
Save the file. Restart the binhex-delugevpn container. That should do it. I hope this helps.
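If you'd rather do the last file steps from the unRAID terminal instead of a file explorer, roughly (a sketch; the region file name is a placeholder, and the Mullvad files are assumed to already be extracted into the openvpn folder):

    cd /mnt/user/appdata/binhex-delugevpn/openvpn
    # Rename the Mullvad config so the container picks it up as an OpenVPN profile
    mv mullvad_se_sto.conf mullvad_se_sto.ovpn
    # Append the two lines that tell OpenVPN to ignore the IPv6 settings
    echo 'pull-filter ignore "route-ipv6"'    >> mullvad_se_sto.ovpn
    echo 'pull-filter ignore "ifconfig-ipv6"' >> mullvad_se_sto.ovpn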
    1 point
  43. TEST BUILDS FOR NVIDIA/DVB COMBINED
People have asked if I'll produce a combined Nvidia/DVB build. Here's the thing: I have no intention of producing separate and combined builds; the workload is just too much for something I have no need of. It would need 8 builds on top of the Nvidia build to do so. I will however produce a combined build, depending on two conditions.
1. People don't mind an increased download size.
2. It works reliably. However, I no longer have any DVB hardware so I can't test.
So I've produced some combination builds for people to try out. I am only doing this for v6.7.1rc2, so if nobody tests each build then it's not going to happen. So if you want it, YOU need to test it.
HOW TO USE
1. Download the Nvidia build of v6.7.1rc2 (Important - Do not use any other version)
2. Once the build has downloaded and the window indicating the copying to flash has appeared, and before you reboot, download one of the attached zip files depending on which DVB build you use, unpack it, and copy the bzmodules and bzfirmware files across to your flash disk.
3. Reboot
I know that the Nvidia plugin will not report the correct version number; that's an easy fix in the future if this works. Once you've rebooted I need to know two things for each build.
1. Does the Nvidia hardware encoding work with Plex/Emby/Jellyfin?
2. Does the DVB hardware work?
Attachments: libreelec-nvidia-v6.7.1rc2.zip, tbs-os-nvidia-v6.7.1rc2.zip, tbs-crazycat-nvidia-v6.7.1rc2.zip, dd-nvidia-v6.7.1rc2.zip
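For step 2, the copy itself is just this (a sketch; the extraction path is a placeholder and the flash drive is assumed to be mounted at /boot as usual):

    # Assuming the chosen zip was unpacked to /tmp/dvb-build
    cp /tmp/dvb-build/bzmodules /tmp/dvb-build/bzfirmware /boot/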
    1 point