
Leaderboard


Popular Content

Showing content with the highest reputation since 05/05/20 in all areas

  1. 8 points
  Unraid Kernel Helper/Builder

    With this container you can build your own customized Unraid kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the kernel/firmware/modules/root filesystem with the nVidia drivers and also DVB drivers (currently DigitalDevices and LibreElec built in).

    nVidia driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.

    ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise it is possible that the built images will not work.

    ATTENTION: Please read the description of the variables carefully! Once you have started the container, don't interrupt the build process; the container will shut down automatically when everything is finished. I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes, and replace 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself can take a long time depending on your hardware, but should be done in roughly 30 minutes (some tasks can take very long; please be patient).

    This is how the build of the images works (simplified):
    - The build process begins as soon as the container starts (the container shows as stopped when the process is finished). Please be sure to set the build options that you need.
    - Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to follow the log (the browser log window can be very laggy depending on how many components you choose). The whole process status can be followed by watching the logs (the button on the right of the docker).
    - The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you also need to delete or move them for any subsequent builds.
    - A backup is copied to /mnt/cache/appdata/kernel/backup-VERSION. Copy that to another drive external to your Unraid server; that way you can easily copy it straight onto the Unraid USB if something goes wrong.

    THIS CONTAINER WILL NOT CHANGE ANYTHING ON YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE. YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER. PLEASE BACK UP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER; THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW THIS ALL WORKS.

    UPDATE NOTICE: If a new update of Unraid is released, you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g. 'ich777/unraid-kernel-helper:6.8.3'.

    Forum notice: When something isn't working with or on your server and you make a forum post, always mention that you use a kernel built by this container! Note that LimeTech does not support custom kernels, and you should ask in this thread if you are using this specific kernel when something is not working.

    CUSTOM_MODE: This is only for advanced users! In this mode the container will stop right at the beginning and copy the build script and the dependencies to build the kernel modules for DVB and joydev into the main directory. (I highly recommend using this mode for changing things in the build script, like adding patches or other modules to build. Connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory; the build script is executable.)

    Note: You can use the nVidia & DVB plugins from linuxserver.io to check whether your driver is installed correctly. Keep in mind that some things will display wrong or not show up, like the driver version in the nVidia plugin; you will still see the installed graphics cards, and likewise the DVB plugin will say that no kernel driver is installed but will still show your installed cards. This is simply because I don't know how their plugins work.

    Thanks to @Leoyzen, klueska from nVidia, and linuxserver.io for the motivation to look into how this all works.

    For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers! After you have finished building the images I recommend deleting the container! If you want to build again, please redownload it from the CA App so that the template is always the newest version!

    Here you can download the prebuilt images:
    - Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 440.82 | DD driver: 0.9.37 | LE driver: 1.4.0)
    - Unraid Custom nVidia & DVB & ZFS builtin v6.8.3: Download (nVidia driver: 440.82 | DD driver: 0.9.37 | LE driver: 1.4.0 | ZFS version: 0.8.4)
    - Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 440.82)
    - Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 440.82)
    - Unraid Custom DVB builtin v6.8.3: Download (DD driver: 0.9.37 | LE driver: 1.4.0)
    - Unraid Custom DVB & ZFS builtin v6.8.3: Download (DD driver: 0.9.37 | LE driver: 1.4.0 | ZFS version: 0.8.4)
    - Unraid Custom ZFS builtin v6.8.3: Download (ZFS version: 0.8.4)
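    As a quick reference, the attach-and-copy workflow described above boils down to something like this (the container name and version folder are examples; adjust to your build):

        # follow the build log; the container stops itself when the build is done
        docker attach Unraid-Kernel-Helper
        # back up the current boot files first (destination folder is just an example)
        mkdir -p /boot/backup && cp /boot/bz* /boot/backup/
        # copy the freshly built files onto the USB key, then reboot
        cp /mnt/cache/appdata/kernel/output-6.8.3/* /boot/
        reboot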
  2. 8 points
    Hey guys,

    First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I don't expect this to be on top of the priority list, but I'm hoping someone on the developer team is willing to invest some time (perhaps after the release).

    Hardware and software involved: 2 x 1TB Samsung EVO 860, set up with LUKS encryption in a BTRFS RAID1 pool.

    ### TLDR (but I'd suggest to read on anyway 😀)

    The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly. This appears to be happening only on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but not sure). Hosting the Docker files directory on /mnt/cache instead of using the loop device seems to fix this problem. A possible idea for implementation is proposed at the bottom. Grateful for any help provided!

    ###

    I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug; I was also able to build (kind of) a workaround for my situation. More details below.

    To see what was actually hammering on the cache I started with all the obvious things, like using a lot of find commands to trace files that were written to every few minutes, and I also used the file activity plugin. Neither was able to trace down any writes that would explain 400 GB worth of writes a day for just a few containers that aren't even that active.

    Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.

    This gave me a situation I was able to reproduce on a virtual machine, though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them in a BTRFS RAID1 setup, created the loop device on the BTRFS mountpoint (same as /dev/cache) and mounted it on /var/lib/docker. I made sure I had the NoCow flags set on the IMG file like unRAID does. Strangely this did not show any excessive writes; iotop showed really healthy values for the same workload (I migrated the Docker content over to the VM).

    After my Debian troubleshooting I went back over to the unRAID server, wondering whether the loop device was created weirdly, so I took the exact same steps to create a new image and pointed the settings from the GUI there. Still the same write issues.

    Finally I decided to take the whole image out of the equation with the following steps:
    - Stopped Docker from the WebGUI so unRAID would properly unmount the loop device.
    - Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint.
    - Created a share on the cache for the Docker files.
    - Created a softlink from /mnt/cache/docker to /var/lib/docker.
    - Started Docker using "/etc/rc.d/rc.docker start".
    - Started my Bitwarden containers.

    Looking into the stats with "iotop -ao" I did not see any excessive writing taking place anymore. I had the containers running for about 3 hours and got maybe 1 GB of writes total (note that on the loop device this gave me 2.5 GB every 10 minutes!).

    Now don't get me wrong, I understand why the loop device was implemented. Dockerd is started with options to make it run with the BTRFS driver, and since the image file is formatted with the BTRFS filesystem this works on every setup; it doesn't even matter whether it runs on XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache, because pointing it at /mnt/user would not allow me to start using the BTRFS driver (obviously the unRAID filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the container; everything is based on the assumption that everyone is running Docker on BTRFS (which of course they are, because of the image 😁).

    I must say that my approach also broke when I changed something in the shares: certain services get restarted, causing Docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just to see whether the loop device was causing the issue, which I think my tests did point out.

    Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all Docker containers off all day, because 300-400 GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've shown that it's not needed at all. It does defeat the purpose of my NAS and SSD cache, though, since its main purpose was hosting Docker containers while allowing the HDs to spin down.

    Again, I'm hoping someone on the dev team acknowledges this problem and is willing to investigate. I got quite a few hits on the forums and Reddit without anyone actually pointing out the root cause of the issue. I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking about possible ways to implement a workaround: for example, adjusting the Docker settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), start Docker on a share on the /mnt/cache partition instead of using the vDisk. In this way you would still keep all the advantages of the docker.img file (cross filesystem type), and users who don't care about writes could still use it, but you'd be massively helping out others who are concerned about these writes.

    I'm not attaching diagnostic files, since they would probably not show what's needed. Also, if this should have been in feature requests, I'm sorry, but I feel that, since the current solution is misbehaving in terms of writes, this could also be placed in the bug report section.

    Thanks for this great product; I've been using it so far with a lot of joy! I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly. Cheers!
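    For clarity, the workaround steps above condense into a rough shell sketch (this assumes the cache share named 'docker' already exists and rc.docker has been edited to skip its mountpoint check; it is a diagnostic hack, not a supported setup):

        /etc/rc.d/rc.docker stop                   # let unRAID unmount the loop device cleanly
        rmdir /var/lib/docker                      # the (now empty) mountpoint must go so the link can be created
        ln -s /mnt/cache/docker /var/lib/docker    # point docker at the share on the raw cache mount
        /etc/rc.d/rc.docker start                  # requires the rc.docker mountpoint check to be removed first
        iotop -ao                                  # watch accumulated writes per process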
  3. 7 points
    a little bird told me that they already were at beta8, huge changes coming....
  4. 6 points
    Console into the app, edit /home/nobody/start.sh and change the last line to read:

        cd /usr/lib/nzbhydra2 && /usr/lib/nzbhydra2/nzbhydra2wrapperPy3.py --datafolder /config/nzbhydra2

    then restart. The image has changed to Python 3, but the startup script still refers to a wrapper that is no longer in the image. I assume this will be fixed in the container by the maintainer. Dave
  5. 5 points
    When is the next new unraid version available?
  6. 5 points
    v6.8.3 done. Apologies, thought I'd done this at release time, but apparently I hadn't. Life's been a bit hectic.
  7. 4 points
    This container needs a ConBee II USB Zigbee stick to work. This container is for the deCONZ software from Dresden Elektronik. It is used to control a ConBee Zigbee USB stick and can be used with Home Assistant.

    Setup:
    1. Without the ConBee USB stick plugged into the server, run the following command in a terminal window: ls /dev/
    2. Plug your ConBee USB stick into the Unraid server, then run the above again. You will now see an extra device here; this is your ConBee Zigbee stick. Most likely ttyACM0 (unless you maybe have a Z-Wave stick plugged in as well, in which case it might not be).
    3. Now add the name of the stick to the template (the default is already ttyACM0). Add it to both "usb conbee:" and "usb device name:".
    4. I think it best to set a static IP for the container. You can then access the container at http://xxx.xxx.xxx.xxx (the IP you set).
    5. Now you can add your Zigbee devices in the webui and connect deCONZ to Home Assistant for it to be able to access your Zigbee devices.
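    A quick way to spot the new device name from steps 1 and 2 (the file names here are just examples):

        ls /dev/ > /tmp/before.txt            # run before plugging the stick in
        ls /dev/ > /tmp/after.txt             # run after plugging it in
        diff /tmp/before.txt /tmp/after.txt   # the added entry (e.g. ttyACM0) is your ConBee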
  8. 4 points
    There will be a video series (soon) for setting up Home assistant in docker (rather than hassio vm) along with deconz, mosquitto etc
  9. 4 points
    As others have posted here, you can't blame Plex, or any single Docker. Something is taking normal writes and amplifying them massively. In my case stopping Plex makes a difference, but only reduces it by about 25%, and rampant writes continue. About 1 GB a minute as I look at it right now! I don't want anyone to think the problem has been solved and the cause was Plex. That isn't the case. It's much more fundamental than that.
  10. 3 points
    @ich777 has fearlessly begun translating Unraid into German. If there are any other German speakers who would like to collaborate in this effort, please let us know here and I can provide further details and instructions. https://github.com/spencerjunraid/lang-de_DE
  11. 3 points
    This. UnRaid for me is a lesson in the KISS principle. (Keep It Simple, Stupid) I work in IT, but I like using UnRaid at home because of its simplicity and its ability to just keep functioning. I've tightened up the security, of course, but it's excellent for my limited home needs, and I only have one docker exposed to the internet through nginx proxy manager, for phone-backup purposes. It works wonderfully, it's been more stable than anything I've run at home previously, and it runs on a shoebox, as my backup box can attest. I actually just built the backup box to buy another unraid license to support the developers. I keep recommending it to absolutely everyone, and I really appreciate the transparency about these security issues, and also the willingness to discuss why certain choices are made. It makes you trustworthy, so keep it up.
  12. 3 points
  13. 3 points
    This has been added as well as turning on/off the dashboard widget in 2020.05.20b. If you turn it off you can access it through the widget or under Tools->System Information.
  14. 3 points
    Sure, is there anything else? I'm getting my pen out... Let me write this down.
  15. 3 points
    Do we have an ETA on when unRAID will support NFSv4+? I've seen this request come up multiple times on here, and it looks like at one point Tom even "tentatively" committed to trying to "get this into the next -rc release": Unfortunately, that was over 3 years ago. Do we have any updates on this? I believe adding support for more recent NFS versions is important because it is likely to resolve many of the problems we see with NFS here on the forum (especially the NFS "stale file handle" errors). I think that's why we also keep seeing this request come up over and over again. I understand where Tom is coming from when he says, "Seriously, what is the advantage of NFS over SMB?": The majority of the time, for the majority of users, I would recommend SMB. It's pretty fantastic as it exists today, but there are times when NFS is the better tool for the job. Particularly when the clients are Linux-based machines: NFS offers much better support for Unix operations (e.g. when you're backing up files to an unRAID share and it contains symbolic links). NFS also offers better performance with smaller files (e.g. those short, random-R/W-like file operations). Rereading my post, I hope this request doesn't come off as overly aggressive. That's certainly not the intent. I just wanted to provide some background on the request and advocate for continued NFS support on unRAID. NFS is still an important feature of unRAID. Thank you in advance for your consideration! -TorqueWrench
  16. 3 points
    Usually mover is run on a schedule, but sometimes we run mover manually. When we run it manually, it would be nice to know how long mover will take to copy all the files. I feel safer not working with files while mover is running (sure, it's paranoia, but I feel safe). Would it be possible to add some kind of % bar showing information about the mover process? At least then I would know how long it will take. Thank you, Gus
  17. 3 points
    Just wanted to give my thanks to the team for the effort in 6.9! I'm patiently waiting for 5.x to leverage the new HW. I very much appreciate you working on this even in the general stress situation we're in... Keep up the good work!!
  18. 3 points
    If you store your docker appdata share and the VM ISO and System shares on an SSD (highly recommended), your array drives will not spin up just because dockers/VMs are in use, unless they are configured to use array drives in some way. When accessing media with Plex, for example, the necessary drives will be spun up, but they can be configured to spin down when not in use while Plex still runs as a docker container on an SSD. See above, as these shares could be cache-only shares using the cache drive. Also, you could put them on an unassigned-devices SSD. If you intend to use the cache drive for write caching of share data (configurable by share), how much benefit you would get from that depends on how often and how much data you write to the array. Yes, if so configured. It will run periodic checks and let you know of problems through the notification system and GUI. Only you can fully answer that question, but media storage, management and playback is a primary unRAID use case. Parity protection is a nice safeguard against disk drive failure, but it is not a backup solution. You should also make plans to keep important data backed up.
  19. 3 points
    Empirically, this is due to the binhex-preclear install, but only indirectly. When you install that plugin it asks you for volume mounts to give to the underlying Docker daemon for mounting inside the container. Host Path 6 is /boot/config/plugins/dynamix/dynamix.cfg. This *should* be a file, I'm assuming, but it didn't exist on my unraid. When you give Docker a path that doesn't exist, it will automatically create a folder at that path. So when I installed binhex-preclear, the Docker daemon created the folder /boot/config/plugins/dynamix/dynamix.cfg/. If you make this folder and refresh your WebUI, you will see that it generates the PHP errors described above.

    The *easiest* fix is:
    1. Uninstall the binhex-preclear plugin/app.
    2. Remove that path from your dynamix plugins folder (or at the very least move it to another name that *doesn't* end in .cfg).
       * You can move the offending directory out of the way via the terminal with the command:
         mv /boot/config/plugins/dynamix/dynamix.cfg /boot/config/plugins/dynamix/dynamix.cfg.broken
       * Or, if you want to remove it:
         rmdir /boot/config/plugins/dynamix/dynamix.cfg
    3. Now refresh the WebUI and you'll see the errors are gone.

    Now, if you want to reinstall the binhex-preclear plugin, you should first create the file that the plugin wants to pass through to the Docker container. From the terminal run the command:
       touch /boot/config/plugins/dynamix/dynamix.cfg
    [EDIT] Squid pointed out that making changes within Settings -> Display Settings will create the file /boot/config/plugins/dynamix/dynamix.cfg, so there's no need to enter the terminal if you don't want to. Then proceed with the install as you normally would, and things should operate just fine.
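    If you want to check whether you're affected before changing anything, a one-line test (illustrative) tells you whether the path is wrongly a directory:

        # a directory here means Docker auto-created the path; a regular file (or nothing) is fine
        [ -d /boot/config/plugins/dynamix/dynamix.cfg ] && echo "directory - broken" || echo "ok"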
  20. 3 points
    I'll work on getting it into the CA repo soon. Probably by next week. A few things though: I didn't end up implementing fail2ban yet, but still plan to. Caddy 2 is now in GA so I'll be working that out too; in a separate build.
  21. 3 points
    I've done a bunch of stuff this week. I'm not sure about being out of beta, but the code is starting to become more structured to a permanent standard, which means we're getting closer to being out of beta. I think the app is pretty close to a point now where no more major changes will require a clean install or anything. There is one other thing I need to do yet, and that is to move all the task queues to the database. That change may or may not require a clean install. Once that is done, I think I'll push Unmanic out of beta. Today I'm testing Nvidia hardware encoding changes (not yet available via docker); so far so good.
  22. 3 points
    For the upcoming version the default driver is changed to "virtio-net". This means newly created VMs will use the revised driver, but existing VMs need manual adjustment.
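    For existing VMs the adjustment would be made in the VM's XML (Edit XML view); a minimal sketch of the relevant interface stanza, assuming a standard br0 bridge setup (bridge name is a placeholder):

        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio-net'/>  <!-- previously: <model type='virtio'/> -->
        </interface>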
  23. 2 points
    I just updated the Nextcloud container to latest, after which I couldn't access Nextcloud and was getting a 400 Bad Request error. So if anyone else has this error, you can fix it by rolling the container back. To do that, change the repository line in the template to linuxserver/nextcloud:18.0.4-ls81
  24. 2 points
    With the upcoming Unraid 6.9 release, one of the exciting new features will be Multi-language Support. This new feature will allow the Unraid webgui to be displayed in a number of new languages! With this, Unraid will be available to many new users and will require new language-specific forums for technical support help. If you are a savvy, experienced Unraid user, are proficient in languages such as Spanish, French, German, Dutch, Mandarin, Arabic, Portuguese (or others!), and are interested in becoming a moderator here in those languages, please reach out to me for more information! I look forward to your DMs and emails. If you have any questions, please feel free to ask them here. Cheers, Spencer
  25. 2 points
    Hi guys, I really love GuildDarts' Docker Folder plugin. I am starting this thread so that people can share their icons for the Docker folders. Below I have pasted my first collection. I hope to create more. Please feel free to share, add requests, ideas, suggestions, etc. A million thanks to @GuildDarts for creating such a fantastic and useful plugin. Thanks, H.
  26. 2 points
    I posted this on the serverbuilds.net forums, and noticed that several users here were interested, so cross-posting! This is a somewhat complex yet in-demand installation, so I figured I'd share my steps in getting a Riot.im chat server syndicated through a Matrix bridge that supports a Jitsi VoIP/video conference bridge. The end result is a self-hosted Discord-like chat server where any chat room can become a video conference with a single click! It has some other neat features like end-to-end encryption and syndication with other Matrix servers AND other types of chat servers (you can have a chat room that links to a Discord room, IRC channel, etc). We'll do almost all of this using apps from the Unraid Community Applications repo!

    Summary: We'll set up some domains for each of our components, then use a LetsEncrypt proxy to generate certificates. Matrix will run the back-end, Riot Chat will run the front-end, and Jitsi will handle the A/V.

    DNS Setup: You're gonna want a few subdomains, even if you have a dyndns setup pointing to your host. They can all point to the same IP, or you can use CNAME or ALIAS records to point to the root domain. A DNS setup for somedomain.gg might look like this:

    Type - Host - Value
    A - @ - 1.2.3.4 (Your WAN IP)
    CNAME - bridge - somedomain.gg
    CNAME - chat - somedomain.gg
    CNAME - meet - somedomain.gg

    In the above, the `@` A-record will set the IP for your domain root, and the CNAME records will cause the 3 subdomains to resolve to whatever domain name you point them at (the root domain, in this case). Each subdomain will host the following:

    bridge: matrix - The core communications protocol
    chat: riot - The chat web UI
    meet: jitsi - The video conferencing bridge

    Firewall Setup: You'll need the following ports forwarded from your WAN to your Unraid server:

    LetsEncrypt: WAN TCP 80 -> LAN 180, WAN TCP 443 -> LAN 1443, WAN TCP 8448 -> LAN 1443, all on your Unraid server IP
    - 80: Used by LetsEncrypt to validate your certificate signing request -- this can be disabled after setup, then only enabled when you need to renew a certificate.
    - 443: LetsEncrypt proxy for encrypted web, duh.
    - 8448: Matrix integrations port for enabling plugins. Also proxied via LetsEncrypt. Make sure this points to 1443, not 8443!
    STUN: TCP and UDP 3478 on WAN -> 3478 on Unraid (or changed to suit your needs)
    Jitsi: UDP port 10000 -> 10000 on Unraid

    We'll be assuming you used these ports in the rest of the guide, so if you needed to change any, compensate as needed!

    Docker Networking: This is a fairly complex configuration that will use at least 7 Docker containers. To make this easier we'll create a custom Docker network that these containers will all live on, so that they can communicate with each other without having to worry about exposing unnecessary ports to your LAN:
    1. In Unraid, go to Settings -> Docker.
    2. Disable Docker so you can make changes: set `Enable Docker` to `No`.
    3. Set `Preserve user defined networks` to `Yes`.
    4. Re-enable Docker.
    5. Open the Unraid console or SSH in.
    6. Create a new Docker network by executing `docker network create --subnet 172.20.0.0/24 sslproxy`, or whatever subnet works for you (adjusted below as needed).

    We're now done with the pre-install stuff! I'd suggest testing your DNS and that the ports are all open on your FW and are getting directed to the right places. If everything looks good, then let's get some dockers!

    LetsEncrypt Install: Before proceeding, wait for your DNS server to update and make sure you can resolve the 3 subdomains remotely.
    This is REQUIRED for LetsEncrypt to validate the domains! LetsEncrypt will need to listen on port 80 and port 443 of your WAN (public-facing) interface so that it can validate your ownership of the domains. We're going to use a Docker image from Community Applications (the user-defined network was already enabled in the Docker settings above):
    1. In Community Applications, search for `LetsEncrypt` and install the container from `linuxserver`.
    2. Set the `Network Type` to `Custom: sslproxy`.
    3. Set the `Fixed IP address` to `172.20.0.10` (or whatever works for you).
    4. Make sure `Privileged` is set to `On`.
    5. Set the `http` port to `180` and the `https` port to `1443`.
    6. Supply an email.
    7. Enter your domain name, i.e. `somedomain.gg`.
    8. Enter your subdomains: `chat,bridge,meet` (and any others you want to encrypt).
    9. Optional: set `Only Subdomains` to false if you want the root domain to also have a cert!

    The rest of the options should be fine as-is. If you do NOT have a domain but use a dynamic DNS service, you can still manage, but you might be limited to a single domain. Make sure `Only Subdomains` is set to `True`, otherwise your install will fail, as LetsEncrypt will expect you to be running on your dyndns service's web server! The following steps will also require you to do some nginx subdirectory redirection instead of domain proxying. SpaceInvader has a great video that demonstrates this in detail.

    Once you've created the Docker instance, review the log. It might take a minute or two to generate the certificates. Let it finish and make sure there are no errors. It should say `Server ready` at the end if all goes well! Try browsing to your newly encrypted page via https://somedomain.gg (your domain) and make sure all looks right. You should see a LetsEncrypt landing page for now. If all went well, your LetsEncrypt certificates and proxy configuration files should be available in /mnt/user/appdata/letsencrypt/.

    LetsEncrypt Proxy Configuration: LetsEncrypt listens on ports 80 and 443, but we also need it to listen on port 8448 in order for Riot integrations via the public integration server to work properly. Integrations let your hosted chatrooms include bots, helper commands (!gif etc), and linking to other chat services (IRC, Discord, etc). This is optional! If you're happy with vanilla Riot, you can skip this. Also, you can run your own private integrations server, but I'm not getting into that here. So, assuming you want to use the provided integrations, we need to get nginx listening on port 8448. To do that, edit `/mnt/user/appdata/letsencrypt/nginx/site-confs/default` and make the following change:

    Original:
    New:

    Next, we are going to need 3 proxy configurations inside LetsEncrypt's nginx server (one each for Matrix, Riot and Jitsi). These live in `/mnt/user/appdata/letsencrypt/nginx/proxy-confs/`. Create the following files:

    matrix.subdomain.conf:
    riot-web.subdomain.conf:
    jitsi.subdomain.conf:

    ^^^ NOTE!!! Make sure you saw the `CHANGE THIS` part of the `$upstream_app` setting. This should be the LAN IP of your Unraid server!

    Done! To test, try visiting https://<subdomain>.somedomain.gg/ and you should get a generic gateway error message. This means that the proxy files attempted to route you to their target services, which don't exist yet. If you got the standard LetsEncrypt landing page, then something is wrong!
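    The contents of those three .conf attachments aren't reproduced above; purely as an illustration of the general shape such a proxy file takes (the server block details and the container IP are assumptions, not the author's exact configs):

        # matrix.subdomain.conf -- illustrative sketch only
        server {
            listen 443 ssl;
            server_name bridge.*;
            include /config/nginx/ssl.conf;
            location / {
                set $upstream_app 172.20.0.30;      # the Matrix container's fixed IP
                proxy_pass http://$upstream_app:8008;
            }
        }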
    Matrix: A Matrix container is available from avhost in Community Applications.
    1. In Community Applications, search for `Matrix` and install the container from `avhost`.
    2. Set the `Network Type` to `Custom: sslproxy`.
    3. Set the `Fixed IP address` to `172.20.0.30`, or whatever works for you.
    4. Set the `Server Name` to `bridge.somedomain.gg` (your domain).
    5. The rest of the settings should be fine, and I suggest not changing the ports if you can get away with it.

    Create the container and run it. Now we need to edit our Matrix config:
    1. Edit `/mnt/user/appdata/matrix/homeserver.yaml`.
    2. Change `server_name: "bridge.somedomain.gg"`.
    3. Change `public_baseurl: "https://bridge.somedomain.gg/"`.
    4. Under `listeners:` and `- port: 8008`, change `bind_addresses: ['0.0.0.0']`.
    5. Change `enable_registration: true`.
    6. Change `registration_shared_secret: xxxx` to some random value. It doesn't matter what it is, just don't use the one from the default config!
    7. Change `turn_uris` to point to your domain, i.e. `"turn:bridge.somedomain.gg:3478?transport=udp"`.
    8. Set a good, long random value for `turn_shared_secret`.

    If you have errors at start-up about your turnserver.pid file or database, you can try editing your /mnt/user/appdata/matrix/turnserver.conf file and adding:
    pidfile=/data/turnserver.pid
    userdb=/data/turnserver.db

    There are a ton of other settings you can play with, but I'd wait until after it's working to get too fancy! Now restart the Matrix container, and check that https://bridge.somedomain.gg/ now shows the Matrix landing page. If not, something's wrong!
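    A quick command-line check of the homeserver is also possible (the /_matrix/client/versions endpoint is part of the standard Matrix client-server API):

        # a JSON list of supported spec versions means Synapse is up and proxied correctly
        curl -s https://bridge.somedomain.gg/_matrix/client/versions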
    Riot Chat: Riot Chat serves as the web front-end chat interface. There's also a great mobile app called RiotIM. For the web interface, there's a Community Applications image for that!
    1. Before we start, we need to manually create the config path and pull in the default config, so open a console/SSH to your server.
    2. Create the config path by executing `mkdir -p /mnt/user/appdata/riot-web/config`.
    3. Download the default config by executing `wget -O /mnt/user/appdata/riot-web/config/config.json https://raw.githubusercontent.com/vector-im/riot-web/develop/config.sample.json` (**NOTE**: This is a different URL than the one suggested in the Docker!)
    4. In Community Applications, search for `riot web` and install the container from `vectorim`. Watch out, there are two -- use the one with the fancy icon, which doesn't end with an asterisk (`*`)!
    5. Set the `Network Type` to `Custom: sslproxy`.
    6. Set the `Fixed IP address` to `172.20.0.20` (or whatever).
    7. The rest of the settings should be fine.

    Create the container and run it. Now let's edit our Riot config. It's a JSON file, so make sure you respect JSON syntax:
    1. Edit `/mnt/user/appdata/riot-web/config/config.json`.
    2. Change `"base_url": "https://bridge.somedomain.gg",`.
    3. Change `"server_name": "somedomain.gg",`.
    4. Under the `"Jitsi:"` subsection near the bottom, change `"preferredDomain": "meet.somedomain.gg"`.

    If all went well, you should see the Riot interface at http://chat.somedomain.gg! If not, figure out why...

    Now let's create our first account!
    1. From the welcome page, click `Create Account`.
    2. If the prior config was correct, `Advanced` should already be selected and it should say something like `Create your Matrix account on somedomain.gg`. If the `Free` option is set, then your Riot Chat web client is using the public matrix.org service instead of your private instance! Make sure the `base_url` setting in your config.json is correct, or just click Advanced and enter `https://bridge.somedomain.gg` in the `Other Servers: Enter your custom homeserver URL` box.
    3. Set your username and password.
    4. Set up encryption by following the prompts (or skip if you don't care). This may require that you whitelist the site in any browser script blockers you have running.

    Done! You now have a privately hosted Discord alternative! Let's add some voice and video chat so we can stop using Zoom 😛

    Jitsi: This part doesn't have a solid Docker image in the Community Applications store, so there are a few more steps involved. We're gonna need to clone their Docker setup, which uses docker-compose.
    1. Open a console/SSH to your server.
    2. Install docker-compose by executing `curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose`.
    3. Make it executable: `chmod u+x /usr/local/bin/docker-compose`.
    4. Move to your appdata folder: `cd /mnt/user/appdata`.
    5. Make and enter a folder for your docker-compose projects: `mkdir docker-compose; cd docker-compose`.
    6. Clone and enter the `docker-jitsi-meet` repo: `git clone https://github.com/jitsi/docker-jitsi-meet ; cd docker-jitsi-meet`.
    7. Create an install environment: `cp env.example .env`.
    8. Populate some random secrets in your environment: `./gen-passwords.sh`.
    9. Edit the install environment (I'm using nano, but edit however you want): `nano .env`.
    10. Change `CONFIG=/mnt/user/appdata/jitsi-meet/`.
    11. Set TZ to your timezone, i.e. `TZ=America/Denver`.
    12. Change `PUBLIC_URL=https://meet.somedomain.gg`.
    13. Change `DOCKER_HOST_ADDRESS=192.168.0.1`, or whatever the LAN address of your Unraid server is.
    14. Create the CONFIG path that you defined in step 10: `mkdir /mnt/user/appdata/jitsi-meet/`.
    15. Create and start the containers: `docker-compose -p jitsi-meet -f docker-compose.yml -f etherpad.yml up -d`.
    16. This will create 4 Jitsi containers as part of a Docker stack -- see your list of dockers. You can't edit them, but take note of the `jitsi-meet_web_1` ports, which should be `8000` and `8443`.

    If you got any errors, it's likely a port conflict somewhere, so find the corresponding setting in your `.env` file and adjust as needed, reflecting any relevant changes in the next step. When we were setting up our nginx proxy configs, you'll recall that the Jitsi config's `$upstream_app` had to be set manually, rather than relying on the internal DNS. That's because the docker-compose stack names are not 100% predictable, so it's better to just hard-code it. You might want to double-check that setting if you have issues from here on.

    To test Jitsi, go to https://meet.somedomain.gg/ and hopefully you see the Jitsi page. Try to create a meeting. In the future, it may be wise to enable authentication on your Jitsi server if you don't want any random person to be able to host conferences on your server! See the docs (or SpaceInvader's video) for details on that.

    Now find a friend and get them to register a Riot account on your server at https://chat.somedomain.gg (or use the mobile app and connect to the custom host). Get in a chat room together, then click the Video icon next to the text input box and make sure it works. It's worth noting that Jitsi works differently when there are only 2 people chatting: they'll communicate directly. With 3 or more, they'll communicate with the Jitsi server and use the TURN service. So it's a good idea to try to get a 3rd person to join as well, just to test everything out.

    That's it, hope this helps! Enjoy!
    To Do:
    * Custom Integrations Server
    * Etherpad Integration

    Edit: While I was making this guide, SpaceInvader came out with a great video covering the Jitsi part! It covers some authentication options that I didn't get into but would highly suggest. Check it out!
  27. 2 points
    The developer of the container (linuxserver) dropped all support for it and deleted the container altogether and advises everyone to switch to FreshRSS
  28. 2 points
    I do not think that article means what you think it means. AFAIK, Minecraft doesn't talk a language that nginx understands; it needs a direct server-client connection, which means a unique port for each server on the same IP. https://www.reliablesite.net/hosting-news/multiple-minecraft-servers-1-ip/ I know you didn't mention 2 servers, but the article you quoted did. For one server, you just need to forward TCP/UDP 25565 to your server's IP, and clients on the WAN can just put your domain name in.
  29. 2 points
    For anyone else that needs it: I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt. A little CYA: to reiterate what has already been said, this workaround is not ideal and comes with some big caveats, so be sure to read through the thread and ask questions before implementing. I'm not going to get into it here, but I used S1dney's same basic directions for the Docker version by making backups and copying files to folders in /boot/config/.

    Create a share called libvirt on the cache drive, just like for the Docker instructions. Then edit rc.libvirt's start_libvirtd method as follows:

    start_libvirtd() {
      if [ -f $LIBVIRTD_PIDFILE ]; then
        echo "libvirt is already running..."
        exit 1
      fi
      if mountpoint /etc/libvirt &> /dev/null ; then
        echo "Image is mounted, will attempt to unmount it next."
        umount /etc/libvirt 1>/dev/null 2>&1
        if [[ $? -ne 0 ]]; then
          echo "Image still mounted at /etc/libvirt, cancelling because this needs to be a symlink!"
          exit 1
        else
          echo "Image unmounted successfully."
        fi
      fi
      # In order to have a soft link created, we need to remove the /etc/libvirt directory, or creating a soft link will fail
      if [[ -d /etc/libvirt ]]; then
        echo "libvirt directory still exists, removing it so we can use it for the soft link."
        rm -rf /etc/libvirt
        if [[ -d /etc/libvirt ]]; then
          echo "/etc/libvirt still exists! Creating a soft link will fail, thus refusing to start libvirt."
          exit 1
        else
          echo "Removed /etc/libvirt. Moving on."
        fi
      fi
      # Now that we know that the libvirt image isn't mounted, we want to make sure the symlink is active
      if [[ -L /etc/libvirt && -d /etc/libvirt ]]; then
        echo "/etc/libvirt is a soft link, libvirt is allowed to start"
      else
        echo "/etc/libvirt is not a soft link, will try to create it."
        ln -s /mnt/cache/libvirt /etc/ 1>/dev/null 2>&1
        if [[ $? -ne 0 ]]; then
          echo "Soft link could not be created, refusing to start libvirt!"
          exit 1
        else
          echo "Soft link created."
        fi
      fi
      # convert libvirt 1.3.1 w/ eric's hyperv vendor id patch to how libvirt does it in libvirt 1.3.3+
      sed -i -e "s/<vendor id='none'\/>/<vendor_id state='on' value='none'\/>/g" /etc/libvirt/qemu/*.xml &> /dev/null
      # remove <locked/> from xml because libvirt + virlogd + virlockd has an issue with locked
      sed -i -e "s/<locked\/>//g" /etc/libvirt/qemu/*.xml &> /dev/null
      # copy any new conf files we dont currently have
      cp -n /etc/libvirt-/*.conf /etc/libvirt &> /dev/null
      # add missing tss user account if coming from an older version of unRAID
      if ! grep -q "^tss:" /etc/passwd ; then
        useradd -r -c "Account used by the trousers package to sandbox the tcsd daemon" -d / -u 59 -g tss -s /bin/false tss
      fi
      echo "Starting libvirtd..."
      mkdir -p $(dirname $LIBVIRTD_PIDFILE)
      check_processor
      /sbin/modprobe -a $MODULE $MODULES
      /usr/sbin/libvirtd -d -l $LIBVIRTD_OPTS
    }

    Add this code to the go file, in addition to the code for the Docker workaround:

    # Put the modified libvirt service file over the original one to make it not use the libvirt.img
    cp /boot/config/service-mods/libvirt-service-mod/rc.libvirt /etc/rc.d/rc.libvirt
    chmod +x /etc/rc.d/rc.libvirt
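    After a reboot, a quick sanity check is possible (not part of the original directions, just a suggestion):

        ls -ld /etc/libvirt    # should show a symlink: /etc/libvirt -> /mnt/cache/libvirt
        pgrep -a libvirtd      # confirms libvirtd is actually running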
  30. 2 points
    I think I made a mistake, as my previous post was deleted. Anyway, as I was writing, I'm concerned too. I'm trying to find a workaround, as unraid has started to throw alerts on both my SSDs: the 187 Reported_Uncorrect attribute is growing. I'm really pissed off with this situation, as I was planning to replace my ProCurve switch with a UniFi one, but now I have to buy SSDs as if they were ink cartridges for my printer. 👹 The Docker service is stopped, I have only 2 VMs running, and I still have 6 MB/s of writes on the SSD. As unraid (or me, probably) is not doing things right all the time, I take diagnostics from time to time, so I searched the history for the starting point of the problem. Attached is a spreadsheet with SMART data from one of the SSDs. Seeing this, it seems pretty obvious things started going crazy with 6.8.0.
  31. 2 points
  32. 2 points
    The animations need to be subtle. I don’t want my Docker tab to feel something like this: https://www.cameronsworld.net/ 😁
  33. 2 points
    Morning, I'd also like to help out, but I can mostly only get to it in the evenings. By the way, there's a funny mistake in main.txt on GitHub: "Array Operation=Array Aufgraben" ("Aufgraben" means "digging up")... Sent from my ONEPLUS A6003 using Tapatalk
  34. 2 points
    In my case it's to give my Linux vm access to my unraid storage for steam.
  35. 2 points
    Thank you to the Unraid team. I have been enjoying my experience with it and have already decided to purchase it.
  36. 2 points
    Sweet. It's working great for me now. Thank you sir. By the way, I have joined hernandito in making folder icons for people if anyone wants them:
  37. 2 points
    ***Update***: Apologies, it seems like there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

    ===========

    Granted, this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to @Hoopster, whose post(s) I took this from.

    What is Plex Hardware Acceleration?
    When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:
    - Media is stored in a compatible file container
    - Media is encoded in a compatible bitrate
    - Media is encoded with compatible codecs
    - Media is a compatible resolution
    - Bandwidth is sufficient

    If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases, as there will be very little if any overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file, or get Plex to transcode the file on the fly into another format to be played.

    A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering/buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file. The issue is that, depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage it using their Hardware Acceleration feature.

    How Do I Know If I'm Transcoding?
    You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. If the stream shows as Direct Play, there's no transcoding happening. If you see (throttled), it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex, you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

    Prerequisites
    1. A Plex Pass, which is required for Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
    2. An Intel CPU that has Quick Sync capability. Search for your CPU using Intel ARK.
    3. A compatible motherboard. You will need to enable the iGPU in your motherboard BIOS.

    In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines, like the HP MicroServer Gen8, have iLO/IPMI, which allows the server to be monitored/managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server passes through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

    Check Your Setup
    If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log in to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

    cd /dev/dri
    ls

    If the output lists devices, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:

    modprobe i915

    There should be no return or errors in the output. Now again run:

    cd /dev/dri
    ls

    You should see the expected items, i.e. card0 and renderD128.

    Give Your Container Access
    Lastly, we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a manufacturer of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels, or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

    chmod -R 777 /dev/dri

    Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down, and enter the following:

    Name: /dev/dri
    Value: /dev/dri

    Click Save, followed by Apply.

    Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format, and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and hardware acceleration.

    Persist Your Config
    On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

    nano /boot/config/go

    Add the following lines to the bottom of the go file:

    modprobe i915
    chmod -R 777 /dev/dri

    Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
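    Incidentally, the GUI Device entry corresponds to Docker's --device flag; if you were running the container by hand it would look something like this (image name and remaining flags are illustrative):

        # equivalent of the GUI "Device" entry (your usual ports/volumes omitted)
        docker run -d --name plex --device /dev/dri:/dev/dri plexinc/pms-docker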
  38. 2 points
    Is anybody using docker compose? Are there any plans to integrate it with unRAID?
  39. 2 points
    Should be soon(™); lots of work to get multiple pools working as problem-free and safely as possible from the get-go, likely much more than anticipated.
  40. 2 points
    That’s incorrect. All connected drives, other than the USB boot drive, count towards the license limit. This limit is checked when the data array is started. However, once the array is started, additional drives can be attached and used as unassigned drives. If the system is rebooted, these additional drives will count towards the limit, so they may need to be temporarily removed in order to get below the limit and start the data array.
  41. 2 points
    First off, let me just say thanks for the amazing visuals and suggestions. About the expansion chevron: never noticed, as I just leave mine in advanced view, but yeah, that's a lot of wasted space. I also love the idea of having a preview to the side; I think I will add both with and without icons and give the user the option to decide. Thanks again for the great suggestions.
  42. 2 points
    Not spinning down won't cause any damage; on the contrary, most believe it's better for the devices to always be spun up, just not so good for noise/power.
  43. 2 points
    I would also like to see GPU drivers supported, preferably as an optional feature to minimise impact on users who don't need it.
  44. 2 points
    I believe this issue is much more widespread than it appears. I found this on the unRAID subreddit and decided to poke around my server. Currently loop2 is writing over 2 GB in under 10 minutes to my unencrypted BTRFS cache pool. Unraid: 6.8.3. I added a new Samsung 860 1TB SSD to my BTRFS pool 4 months ago; it already shows 22.01 TB written (47269069408 LBAs) over 3383 power-on hours (4m, 18d, 23h). I'd rather not have to run XFS and/or modify unRAID beyond what is supported. Hopefully we can get an official update on this and/or a fix soon, as this is causing excessive writes to my SSDs, thus reducing their life and possibly causing unforeseen damage. Referenced subreddit post: https://www.reddit.com/r/unRAID/comments/ggbvgv/unraid_is_unusable_for_me_because_of_the_docker/
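    For anyone who wants to measure this themselves, the kernel exposes cumulative sectors written per device in /proc/diskstats; a small sketch (run it twice, ten minutes apart, and subtract):

        # field 3 is the device name, field 10 is sectors written (512 bytes each)
        awk '$3 == "loop2" { printf "%.0f MiB written since boot\n", $10 * 512 / 1048576 }' /proc/diskstats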
  45. 2 points
    @binhex, please continue and disregard the bad comments; your dockers are very helpful to lots of us (including me). We have a way to stay behind a VPN for torrents on Docker because you provide such a docker! Thank you for your effort! Sent from my iPad using Tapatalk
  46. 2 points
    ? - I can only assume this abuse was targeted at me; thanks for that! For your information, I do a LOT for this community, spending many hours supporting users every day, and your comment makes me reconsider whether I should bother! Your post regarding that warning: it is visible in everyone's log, it is, as I said, of no consequence, and I wanted to make that clear before other people also started mentioning they had 'the same issue' with the same message in their log. This can lead to support frustration, as issues are very rarely 'the same' even if the symptom is (cannot access the web UI).
  47. 2 points
    Man, I feel for you but that's why they have a 30-day trial and this forum. You have an entire month to try things out before you even need to think about purchasing anything (you can even request a longer trial), and you can ask almost anything here on the forum and get a quick answer. The Unraid team puts everything out there for you and gives you the resources to figure out if it's the right OS for you. I'm sorry you rushed into it without doing your homework first.
  48. 2 points
    Since this is the first thread that comes up on google and isn't very detailed, I just wanted to link the guide I just wrote. It shows you how to create a docker container, add it to your own private docker registry (or you can use dockerhub), and then add it to the private apps section of Community Applications.
  49. 2 points
    Just want to throw this out there: one of the main reasons some of us choose to mess with compose is to get around some of the limitations of the unRAID template system, in particular when it comes to complex multi-container applications, which often use several frontend and backend networks.
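    As a concrete (hypothetical) example of the kind of layout a template can't express, a minimal compose file with split frontend/backend networks:

        # docker-compose.yml -- illustrative multi-network app
        version: "3"
        services:
          web:
            image: nginx
            networks: [frontend]
          api:
            image: example/api    # placeholder image
            networks: [frontend, backend]
          db:
            image: postgres
            networks: [backend]
        networks:
          frontend:
          backend: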
  50. 2 points
    Go to Main -> Boot Device -> Flash -> Syslinux Configuration. When you select "advanced view" you can simply select the "safe mode" option. Apply and reboot.
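    For reference, what that GUI option does is select a different boot label in the syslinux config; the stock safe-mode entry looks roughly like this (from memory, so treat it as illustrative rather than authoritative):

        label unRAID OS Safe Mode (no plugins, no GUI)
          kernel /bzimage
          append initrd=/bzroot unraidsafemode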