Leaderboard

Popular Content

Showing content with the highest reputation since 12/18/21 in all areas

  1. Application Name: Steam (Headless)
Application Site: https://store.steampowered.com/
Docker Hub: https://hub.docker.com/r/josh5/steam-headless/
Github: https://github.com/Josh5/docker-steam-headless/
Discord: https://unmanic.app/discord (Not just for Unmanic...)

Description: Play your games in the browser with audio. Connect another device and use it with Steam Remote Play.

Features:
- NVIDIA GPU support
- Full video/audio noVNC web access to a Desktop
- Root access
- SSH server for remote terminal access

Notes:
ADDITIONAL SOFTWARE: If you wish to install additional applications, you can place a script ending with ".sh" inside the "~/init.d" directory. It will be executed on container startup (a rough sketch is shown at the end of this post).
STORAGE PATHS: Everything that you wish to save in this container should be stored in the home directory or in a Docker container mount that you have specified. All files stored outside your home directory are not persistent and will be wiped if the container is updated or you change something in the template.
GAMES LIBRARY: It is recommended that you mount your games library to `/games` and configure Steam to add that path.
AUTO START APPLICATIONS: In this container, Steam is configured to start automatically. If you wish to add additional services to start automatically, add them under Applications > Settings > Session and Startup in the WebUI.
NETWORK MODE: If you want to use the container as a Steam Remote Play (previously "In Home Streaming") host device, set the Network Type to "host". This is required for controller hardware to work and to prevent traffic being routed through the internet, since Steam thinks you are on a different network.

Setup Guide:
CONTAINER TEMPLATE: Navigate to the "APPS" tab, search for "steam-headless", and select either Install or Actions > Install from the search result. Configure the template as required.
GPU CONFIGURATION: This container can use your GPU. For it to do so, you need the NVIDIA plugin installed. Note: currently the container only supports NVIDIA GPUs; AMD and Intel will follow shortly...
- Install the Unraid Nvidia-Driver plugin. This will maintain an up-to-date NVIDIA driver installation on your Unraid server.
- Toggle the steam-headless Docker container template editor to "Advanced View".
- In the "Extra Parameters" field, ensure that you have the "--runtime=nvidia" parameter added.
- (Optional - only necessary if you have more than one GPU; with a single GPU, leaving this as "all" is fine.) Expand the "Show more settings..." section near the bottom of the template and copy your GPU UUID into the Nvidia GPU UUID (NVIDIA_VISIBLE_DEVICES) variable. The UUID can be found in the Unraid Nvidia plugin; see that forum thread for details.
ADDING CONTROLLER SUPPORT: Unraid's Linux kernel by default does not have the modules required to support controller input. Steam requires these modules to be able to create the virtual "Steam Input Gamepad Emulation" device that it can then map buttons to. @ich777 has kindly offered to build and maintain the required modules for the Unraid kernel, as he already has a CI/CD pipeline in place and maintains a small number of other kernel modules for other projects. So a big thanks to him for that!
- Install the uinput plugin from the Apps tab.
- The container will not be able to receive kernel events from the host unless the Network Type is set to "host". Ensure that your container is configured this way.
WARNING: Be aware that this container requires at least ports 8083, 32123, and 2222 to be available for the WebUI, Web Audio, and SSH to work. It will also require any ports that Steam needs for Steam Remote Play.
No server restart is required after installing the uinput plugin; however, ensure that the steam-headless Docker container is restarted so it can detect the newly added module.
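For the "ADDITIONAL SOFTWARE" note above, a startup script might look like the following minimal sketch. The file name, the htop package, and the use of passwordless sudo with an apt-based package manager are assumptions for illustration, not something the container documentation specifies:

    #!/usr/bin/env bash
    # ~/init.d/10-extras.sh - hypothetical example; runs on every container startup
    # log each start so you can confirm the script is being picked up
    echo "[init.d] container started at $(date)" >> ~/init.d/startup.log
    # install an extra tool only if it is not already present
    if ! command -v htop >/dev/null 2>&1; then
        sudo apt-get update && sudo apt-get install -y htop
    fi

Because everything outside the home directory is wiped on container updates, a script like this simply reinstalls whatever extras you need on each start.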
    7 points
  2. I am in the process of remaking Macinabox and adding some new features, and I hope to have it finished by next weekend. I am sorry for the lack of updates recently on this container. Thank you @ghost82 for all you have done in answering questions here and on GitHub, and sorry I haven't reached out to you before.
    7 points
  3. Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk.

What can you do with WireGuard? Let's walk through each of the connection types:
- Remote access to server: Use your phone or computer to remotely access your Unraid server, including Unraid administration via the webgui and access to dockers, VMs, and network shares as though you were physically connected to the network.
- Remote access to LAN: Builds on "Remote access to server", allowing you to access your entire LAN as well.
- Server to server access: Allows two Unraid servers to connect to each other.
- LAN to LAN access: Builds on "Server to server access", allowing two entire networks to communicate. (see this guide)
- Server hub & spoke access: Builds on "Remote access to server", except that all of the VPN clients can connect to each other as well. Note that all traffic passes through the server.
- LAN hub & spoke access: Builds on "Server hub & spoke access", allowing you to access your entire LAN as well.
- VPN tunneled access: Route traffic for specific Dockers and VMs through a commercial WireGuard VPN provider (see this guide)
- Remote tunneled access: Securely access the Internet from untrusted networks by routing all of your traffic through the VPN and out Unraid's Internet connection.

In this guide we will walk through how to set up WireGuard so that your trusted devices can VPN into your home network to access Unraid and the other systems on your network.

Prerequisites
- You must be running Unraid 6.8+ with the Dynamix WireGuard plugin from Community Apps.
- Understand that giving someone VPN access to your LAN is just like giving them physical access to your LAN, except they have it 24x7 when you aren't around to supervise. Only give access to people and devices that you trust, and make certain that the configuration details (particularly the private keys) are not passed around insecurely. Regardless of the "connection type" you choose, assume that anyone who gets access to this configuration information will be able to get full access to your network.
- This guide works great for simple networks. But if you have Dockers with custom IPs or VMs with strict networking requirements, please see the "Complex Networks" section below.
- Unraid will automatically configure your WireGuard clients to connect to Unraid using your current public IP address, which will work until that IP address changes. To future-proof the setup, you can use Dynamic DNS instead. There are many ways to do this; probably the easiest is described in this 2 minute video from SpaceInvaderOne.
- If your router has UPnP enabled, Unraid will be able to automatically forward the port for you. If not, you will need to know how to configure your router to forward a port.
- You will need to install WireGuard on a client system. It is available for many operating systems: https://www.wireguard.com/install/ Android or iOS make good first systems, because you can get all the details via QR code.

Setting up the Unraid side of the VPN tunnel
- First, go to Settings -> Network Settings -> Interface eth0. If "Enable bridging" is "Yes", then WireGuard will work as described below. If bridging is disabled, then none of the "Peer type of connections" that involve the local LAN will work properly. As a general rule, bridging should be enabled in Unraid.
- If UPnP is enabled on your router and you want to use it in Unraid, go to Settings -> Management Access and confirm "Use UPnP" is set to Yes.
- On Unraid 6.8, go to Settings -> VPN Manager.
- Give the VPN Tunnel a name, such as "MyHome VPN".
- Press "Generate Keypair". This will generate a set of public and private keys for Unraid. Take care not to inadvertently share the private key with anyone (such as in a screenshot like this).
- By default the local endpoint will be configured with your current public IP address. If you chose to set up DDNS earlier, change the IP address to the DDNS address.
- Unraid will recommend a port to use. You typically won't need to change this unless you already have WireGuard running elsewhere on your network.
- Hit Apply.
- If Unraid detects that your router supports UPnP, it will automatically set up port forwarding for you. If you see a note that says "configure your router for port forwarding..." you will need to login to your router and set up the port forward as directed by the note. Some tips for setting up the port forward in your router:
  - Both the external (source) and internal (target/local) ports should be set to the value Unraid provides. If your router interface asks you to put in a range, use the same port for both the starting and ending values.
  - Be sure to specify that it is a UDP port and not a TCP port.
  - For the internal (target/local) address, use the IP address of your Unraid system shown in the note.
  - Google can help you find instructions for your specific router, i.e. "how to port forward Asus RT-AC68U".
- Note that after hitting Apply, the public and private keys are removed from view. If you ever need to access them, click the "key" icon on the right hand side. Similarly, you can access other advanced settings by pressing the "down chevron" on the right hand side. They are beyond the scope of this guide, but you can turn on help to see what they do.
- In the upper right corner of the page, change the Inactive slider to Active to start WireGuard. You can optionally set the tunnel to Autostart when Unraid boots.

Defining a Peer (client)
- Click "Add Peer".
- Give it a name, such as "MyAndroid".
- For the initial connection type, choose "Remote access to LAN". This will give your device access to Unraid and other items on your network (there are some caveats to this covered below).
- Click "Generate Keypair" to generate public and private keys for the client. The private key will be given to the client / peer, but take care not to share it with anyone else (such as in a screenshot like this).
- For an additional layer of security, click "Generate Key" to generate a preshared key. Again, this should only be shared with this client / peer.
- Click Apply.
- Note: Technically, the peer should generate these keys and not give the private key to Unraid. You are welcome to do that, but it is less convenient, as the config files Unraid generates will not be complete and you will have to finish configuring the client manually.

Configuring a Peer (client)
- Click the "eye" icon to view the peer configuration. If the button is not clickable, you need to apply or reset your unsaved changes first.
- If you are setting up a mobile device, choose the "Create from QR code" option in the mobile app and take a picture of the QR code. Give it a name and make the connection. The VPN tunnel starts almost instantaneously; once it is up you can open a browser and connect to Unraid or another system on your network. Be careful not to share screenshots of the QR code with anyone, or they will be able to use it to access your VPN.
- If you are setting up another type of device, download the file and transfer it to the remote computer via trusted email or dropbox, etc. Then unzip it and load the configuration into the client. Protect this file; anyone who has access to it will be able to access your VPN. (A rough example of what such a config file contains is sketched at the end of this post.)

Complex Networks
The instructions above should work out of the box for simple networks. With "Use NAT" defaulted to Yes, all network traffic on Unraid uses Unraid's IP, and that works fine if you have a simple setup. However, if you have Dockers with custom IPs or VMs with strict networking requirements, you'll need to make a few changes:
- In the WireGuard tunnel config, set "Use NAT" to No.
- In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For instance, for the default pool of 10.253.0.0/24 you should add this static route: Network: 10.253.0.0/24 (aka 10.253.0.0 with subnet 255.255.255.0), Gateway: <IP address of your Unraid system>.
- If you use pfSense, you may also need to check the box for "Static route filtering - bypass firewall rules for traffic on the same interface". See this.
- If you have Dockers with custom IPs, then on the Docker settings page, set "Host access to custom networks" to "Enabled". See this: https://forums.unraid.net/topic/84229-dynamix-wireguard-vpn/page/8/?tab=comments#comment-808801

There are some configurations you'll want to avoid; here is how a few key settings interact:
- "Use NAT" = Yes, "Host access to custom networks" = disabled (static route optional): server and dockers on bridge/host - accessible! VMs and other systems on LAN - accessible! dockers with custom IP - NOT accessible (this is the "simple network" setup assumed by the guide above).
- "Use NAT" = Yes, "Host access to custom networks" = enabled (static route optional): server and dockers on bridge/host - accessible! VMs and other systems on LAN - NOT accessible. dockers with custom IP - NOT accessible (avoid this config).
- "Use NAT" = No, no static route: server and dockers on bridge/host - accessible! VMs and other systems on LAN - NOT accessible. dockers with custom IP - NOT accessible (avoid this; if "Use NAT" = No, you really need to add a static route in your router).
- "Use NAT" = No, "Host access to custom networks" = disabled, static route added: server and dockers on bridge/host - accessible! VMs and other systems on LAN - accessible! dockers with custom IP - NOT accessible (you've come this far, just set "Host access to custom networks" to enabled and you're set).
- "Use NAT" = No, "Host access to custom networks" = enabled, static route added: server and dockers on bridge/host - accessible! VMs and other systems on LAN - accessible! dockers with custom IP - accessible! (woohoo! the recommended setup for complex networks).

About DNS
Everything discussed so far should work if you access the devices by IP address or with a Fully Qualified Domain Name such as yourpersonalhash.unraid.net. Short names such as "tower" probably won't work, nor any DNS entries managed by the router. To get those to work over the tunnel, return to the VPN Manager page in Unraid, switch from Basic to Advanced mode, and add the IP address of your desired DNS server into the "Peer DNS Server" field (don't forget to put the updated config file on the client after saving it!). You may want to use the IP address of the router on the LAN you are connecting to, or you could use a globally available IP like 8.8.8.8.

** "WireGuard" and the "WireGuard" logo are registered trademarks of Jason A. Donenfeld.
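For reference, the peer config file Unraid generates (and encodes in the QR code) follows the standard WireGuard format. A hypothetical "Remote access to LAN" client config might look roughly like the sketch below; the keys, tunnel addresses, hostname, and LAN subnet are placeholders, not values Unraid will actually produce for you:

    [Interface]
    PrivateKey = <peer private key>
    Address = 10.253.0.2/32
    DNS = 192.168.1.1                      # only present if a Peer DNS Server was set

    [Peer]
    PublicKey = <Unraid tunnel public key>
    PresharedKey = <preshared key, if generated>
    Endpoint = your-ddns-name.example.com:51820
    AllowedIPs = 10.253.0.1/32, 192.168.1.0/24   # tunnel address plus the remote LAN

The AllowedIPs line is what distinguishes the connection types: "Remote access to server" lists only the tunnel address, while "Remote tunneled access" would use 0.0.0.0/0 to send all traffic through the tunnel.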
    4 points
  4. Heimdall stopped working with the Nginx reverse proxy; on the LAN everything is okay. It looks like the CSS is broken. *edit* I opened a console in the docker, went to /var/www/localhost/heimdall/, opened ".env" with vi, changed the host from http://localhost to http://SUBDOMAIN.DOMAIN, and rebooted the docker (sketched below).
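A rough sketch of that fix from the Unraid host. The container name "heimdall" and the APP_URL variable name are assumptions; check your own template name and .env for the exact key:

    docker exec -it heimdall sh
    cd /var/www/localhost/heimdall
    vi .env          # change APP_URL=http://localhost to APP_URL=http://SUBDOMAIN.DOMAIN
    exit
    docker restart heimdall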
    4 points
  5. I have put my initial Beta release into CA now. Please post questions in this feature request.
    4 points
  6. Feature Request: @Squid Now that we have multiple cache pools, I have some Docker appdata stored on other cache drives - I have 3 in total now. Can we get support for multiple cache pools where Docker appdata is stored on other cache drives? Example: I have all my normal appdata stored on my cache drive, but I also have Plex_AppData stored on a separate cache drive, and Nextcloud stored on another separate cache drive as NextCloud_AppData. Then, is there a way that when backing up a docker's appdata, only that docker (or its linked dockers - for example, we link Nextcloud to its database docker) is stopped and then started back up once it has been backed up, before moving on to the next docker? That way only the docker currently being backed up is stopped, and the others continue to run until it's their turn. Then also make it so we can restore one docker (or set of linked dockers) instead of all or nothing. And if we're stopping and starting dockers like I mentioned, then the individual (or linked) dockers are backed up into individual backups, by name_date_time. Some of us have so many dockers that one backup can be close to 1TB, and if we need only one app restored it takes forever - and appdata is only growing. This would allow our dockers to be down only for a short time, just long enough for each to be backed up and running again, and then we could restore individual backups. I would also like the ability to back up sets of dockers on schedules. Example: Daily Backups - mark dockers for daily backups. Weekly Backups - mark dockers for weekly backups. Monthly Backups - mark dockers for monthly backups. Plus an option to back up marked dockers now! I don't need all my dockers backed up on the same schedule: a few I need backed up on a daily basis, most others weekly, and a few only monthly. My backups can take anywhere from 2-3 hours at times, and that's a long time for them all to be down. I know it's asking a lot, but I feel it would be a huge benefit to the community.
    4 points
  7. Yes, that is the log that @VK28.01 quoted at the beginning. The same text scrolls through over and over again very quickly - apparently so quickly that it almost freezes the Unraid WebUI for me, and I have trouble stopping the Docker container because I can barely get into the context menu. Fortunately it does still respond after a while. I've got it... The template has the variable RUN_OPTS, which in turn has the default value "run options here", as you can still see here: That text was pre-filled by the template, I assume, which prevented a correct start. 😄 That's also why it worked once I entered something functional there. I removed the entry from my config entirely and now it runs.
    3 points
  8. all the users, including me, like you! thank you to you!
    3 points
  9. Waking up after the omicron wave, @SpencerJ let me know of this prophecy 🙂 "00110110 00101110 00110001 00110000 00101110 00110000 00101101 01110010 01100011 00110011 01100100"
    3 points
  10. That's it! I thought that I only needed to change the port, but you have to delete the whole line and then add it again. I assumed that deleting the entry meant that I had to delete the port that was already there and just put another port in there. Both servers now show up in LAN. Not online. I will try a few things and see if I can get it working. Most likely something wrong with the ports. I also can't connect to the modded server because it keeps telling me that it is a different version, but I had the same issues in the VM. I will ask for help again if I can't get it working.
    3 points
  11. Nothing, the touch command will create an empty file. Unraid checks on startup if a file is present or not. The content is irrelevant.
    3 points
  12. hi guys, this looks like a qbittorrent bug, please feel free to monitor or add anything useful to this support thread:- https://github.com/qbittorrent/qBittorrent/issues/15969 this may also be related:- https://github.com/qbittorrent/qBittorrent/issues/15965 and/or this:- https://github.com/alexbelgium/hassio-addons/issues/155 if you wish to roll back to get you going then follow these instructions, see Q5:- https://github.com/binhex/documentation/blob/master/docker/faq/unraid.md
    3 points
  13. I use this script, which checks if Plex has started transcoding and stops Trex if that is true. It starts Trex after Plex has finished transcoding. It is for Nvidia cards only. Feel free to use it and modify it to your needs.

    #!/bin/bash
    # Check if nvidia-smi daemon is running and start it if not.
    if [[ `ps ax | grep nvidia-smi | grep daemon` == "" ]]; then
        /usr/bin/nvidia-smi daemon
    fi

    sleep 300 # Wait until array is online and all dockers are started. Comment this out if you are testing the script.

    TREX=`docker container ls -q --filter name=trex*`

    while true; do
        if [[ `/usr/bin/nvidia-smi | grep Plex` == "" ]]; then
            # If Plex is not transcoding, start the trex-miner container if it is not running.
            if [[ `docker ps | grep $TREX` == "" ]]; then
                echo "No Plex, starting Trex."
                docker start $TREX
            fi
        else
            # If Plex is transcoding, stop the trex-miner container if it is running.
            if [[ `docker ps | grep $TREX` != "" ]]; then
                echo "Plex running, stopping Trex."
                docker stop $TREX
            fi
        fi
        sleep 1
    done
    3 points
  14. Hello everyone, here is a small guide on how to set up WireGuard. My provider is all-inkl.com, my router is a Fritzbox 7490, and my client devices are Android devices.

First of all we need a DynDNS; there are free and paid providers for this. Strato charges about 15 euros per year for a paid DynDNS (no SSL), as of 2021. In my case I use a DynDNS at all-inkl, since I already have several domains registered there; the advantage is that you can attach a DynDNS to any of your domains there. Alternatively you can use DuckDNS as the DNS service (available as an Unraid plugin); that service is free, the downside being that you can't pick your preferred domain name. :)

- First you have to go to the Members area at all-inkl.
- There, under Domains, order a domain, for example 123.de. Once the domain is set up, switch to the KAS login via the technical administration.
- There, go to Tools - DDNS settings.
- Here we have to create a new DDNS user.
- For Host you can use anything you like; as an example we take sub.123.de. "sub.123.de" is now our subdomain under our main domain; the main domain still points to the webspace!
- Dual-Stack (this is for Internet IPv4 & IPv6) has to be enabled; otherwise access is only possible via IPv4!
- For the description you can enter whatever you like; I enter "Heimnetz" (home network). I generate a password and save it!

After everything has been entered and confirmed, a window appears with the important data for our Fritzbox: Host (sub.123.de), update URL with IPv4 & IPv6, username, and password. We copy this data and open our Fritzbox. On the Fritzbox, the advanced view should be enabled if the DynDNS item is not shown. Now enter the data as given. For the update URL it is very important, no matter which provider you use, that you include this (if your provider supports it): dyndns.kasserver.com/?myip=<ipaddr>&myip6=<ip6addr> - the important part is "<ipaddr>&myip6=<ip6addr>", so that you can use both IPv4 and IPv6.

With that, the DynDNS is set up. Next, WireGuard has to be configured (the easy part!). We proceed as follows. Open Unraid - I assume Community Applications is installed - and search for WireGuard.

- There we take Dynamix WireGuard. Simply install it; nothing needs to be configured here.
- Now go to Settings -> VPN Manager in Unraid and click "Add tunnel".
- Set the tunnel to active, and please enable autostart as well.
- For Local name you can enter whatever you like; in my case I enter "Home". Then click the arrow on the right so we can see the advanced settings.
- For Network protocol, select IPv4 and IPv6.
- For Local endpoint we enter the domain "sub.123.de" : 51820. You can also change the port; in our case we leave it as it is, but it is better to change it!
- Next, click Add peer to create a user. You can name the user freely: phone, tablet, or similar.
- Now it matters what you want to do over the VPN; I think a picture says more than 1000 words. In our case we want to route our complete traffic through our server, so for Peer type of access we choose: Remote tunneled access. Then we confirm everything.

Next we search for WireGuard in the Play Store. Here is the Google link: https://www.google.com/search?q=wireguard+play+store&client=tablet-android-samsung-nf-rev1&sourceid=chrome-mobile&ie=UTF-8 - install the app. Now open WireGuard; a plus symbol appears at the bottom. Tap it once and choose the QR code scanner. In your VPN Manager, where we still are, click the eye icon on the far right; a QR code appears, which we simply scan with the app.

Your client is now, in theory, connected to Unraid. As a last step we have to go back into the Fritzbox and open the port we specified earlier. We switch to the Fritzbox settings again and go to "Internet" -> "Freigaben" (Permit Access) -> "Portfreigaben" (Port Sharing). Here we now have to add our server. Since we don't want the server itself to open ports, but want to control that ourselves, we remove all the checkmarks. Now our Fritzbox knows that we have an Unraid server. Next we have to open the port: click "neue Freigabe" (new rule) at the bottom and the following popup appears. Select port sharing here. The description can be anything. For the protocol, WireGuard uses UDP. For the port, enter the same port as in our Unraid VPN Manager. We confirm everything and now restart our Unraid system!

Next, on our phone (or other device), set the slider in WireGuard to active. Once the server has restarted (as we know, this always takes a while) the VPN connection should be up. In my case everything runs via Remote tunneled access, which means that if my server is ever off or unreachable, I have no internet either, since all traffic now runs through the VPN; alternatively I can simply deactivate the VPN connection on the phone with the slider. Whether the VPN connection is active can be seen in your Unraid under Dashboard, bottom left - here is a picture of my connections. If no traffic shows up there, you have to go through all the steps again from the beginning, or ask here in the forum if necessary. I hope you liked this guide and that it is explained understandably. Have fun with your new VPN connection!
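One quick way to double-check the tunnel from the Unraid side, assuming the wg command-line tool that ships with the WireGuard plugin is available in a web terminal and the tunnel is the first one (wg0), is to look for a recent handshake after connecting from the phone:

    wg show wg0
    # a "latest handshake" line under the peer means packets are making it through the forwarded UDP port

If no handshake appears, the port forward on the Fritzbox or the DynDNS name are the first things to re-check.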
    3 points
  15. Good day! Machinaris v0.6.8 is now available. Changes include: SHIBGreen - cross-farming support for this blockchain fork. Support for pooling configuration of forks like Chives. Updated blockchains: Chives, Stor, Stai(coin). Various fixes for issues reported in the Discord. Thanks all! Known Issues: - Staicoin is broken due to their blockchain renaming; run the test stream for a working version. - Summary page for Chives does not show the Partial Proofs to Pools chart when pooling.
    3 points
  16. Create a new port in the template that looks roughly like this: Then you can connect on port 5990 with pretty much any VNC client, as in this example. But please don't forget that the host port must always be a different one, while the container port must always be 5900. Just as additional info: I use 5990 in this example precisely because unRAID itself uses ports 5900 and up for the VNC connections to the VMs, and this avoids a conflict (see the docker run sketch below).
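As a rough illustration of that port mapping outside the template editor - the container name and image below are just placeholders - the equivalent docker run option would be:

    # host port 5990 -> container port 5900 (VNC), steering clear of the 59xx ports Unraid uses for VM VNC
    docker run -d --name my-container -p 5990:5900 <image>

In the Unraid template this corresponds to a port mapping with Container Port 5900 and Host Port 5990.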
    3 points
  17. I just wanted to say thank you for the plugin. For the first time yesterday I was able to boot from EFI using iPXE and then network boot with iSCSI as the disk. The plugin is going to make experimenting with different things a breeze and means I always have a bootable OS on my home network. My next step is to see if I can mount a compressed qcow2 image and network boot.
    3 points
  18. I have been using Unraid since 6.1.8, and I believe I've been experiencing a similar issue with 6.9.2 This year in June I upgraded my motherboard & CPU (built in 2015: Asrock H97 Pro4S, Xeon E3-1231v3 24GB 1600 ram, upgraded this year to: Asrock Z170S, i5-7600k, 16GB 3200 ram) and I had upgraded specifically for the Intel QuickSync hardware transcoding. On the Xeon hardware, the server was perfectly stable, but it was at its limit in terms of software transcoding, hence the upgrade. After the upgrade, I started noticing frequent crashes occurring more & more frequently -- at first once per month, then once per week, and now every few days. I thought I had successfully troubleshooted (troubleshot?) the issue when I noticed the NTP server was causing a kernel panic. I found this after I set up a syslog server on a raspberry pi I had lying around. Unfortunately, even after disabling NTP entirely, the crashes persisted, but now there was no trace of any issues in the syslog. My raspberry pi is also a pihole, so I am able to see when the DNS requests cease, and that the server crashes most frequently in the early hours of the morning, but beyond that I'm stumped. This morning, I found the threads posted by @Tristankin, and rather than downgrade back to 6.8.3, I decided to experiment by commenting out the # modprobe i915 line in my go file. I'm hoping that the server continues to function, and I think I'll be able to make do with software transcoding for the time being, but I would like very much for this issue to be resolved. Happy to provide any details as necessary.
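For anyone wanting to try the same workaround, the change described above lives in the flash drive's go file. A minimal sketch is below; the exact contents of your own go file will differ, and the i915 line only exists if you previously added it for Intel QuickSync transcoding:

    #!/bin/bash
    # /boot/config/go
    # modprobe i915        # temporarily commented out while testing 6.9.2 stability
    /usr/local/sbin/emhttp &

A reboot is needed for the change to take effect, since the go file only runs at boot.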
    3 points
  19. Coming from this thread: I would really appreciate a simple GUI way to configure additional Samba/SMB options for my server. Specifically, I'm interested in changing the following options to improve the security of the server (see the sketch below):

server min protocol = SMB3_11
client min protocol = SMB3_11
client ipc min protocol = SMB3_11
null passwords = No
client signing = required
client protection = encrypt
server signing = mandatory
server smb encrypt = required
client ipc signing = required
ntlm auth = ntlmv2-only

I would rather these options be available under 'SMB Settings' as drop-down options (for example, 'Enable NetBIOS' is currently listed there) than have to use the SMB extra configuration field, which I find confusing and difficult to use. I think the out-of-the-box defaults should remain as broadly compatible as possible, but it should not be a difficult process to enable high-security configurations on the server. Thanks,
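Until something like this exists in the GUI, these are standard smb.conf global parameters, so one workaround - sketched here under the assumption that the text entered in Settings -> SMB -> "Samba extra configuration" is included into smb.conf and that an explicit [global] header is accepted there - is to paste them in as a block:

    [global]
        server min protocol = SMB3_11
        client min protocol = SMB3_11
        client ipc min protocol = SMB3_11
        null passwords = No
        client signing = required
        client protection = encrypt
        server signing = mandatory
        server smb encrypt = required
        client ipc signing = required
        ntlm auth = ntlmv2-only

Be aware that forcing SMB3_11 and mandatory signing/encryption will lock out older clients, which is exactly why drop-downs with sane defaults would be the friendlier long-term solution.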
    3 points
  20. Here is a good description of what the utility does: https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ Basically, it enables MSI support for devices that support it, via registry changes. With MSI disabled, different devices can share the same IRQ. That's fine for some devices - there won't be any issue - but for others it becomes problematic, usually with GPUs (audio and video), onboard audio, and network adapters. Usually you will find USB controllers sharing an IRQ with something else, a GPU for example. I was having issues especially with the GPU; in my build a shared IRQ existed for the USB controller, InfiniBand network, GPU, and onboard audio. Connecting to remote desktop via InfiniBand with a shared IRQ caused instability with connection drops. With the utility I was able to resolve most of them; unfortunately I still have an IRQ shared between the USB controller and the InfiniBand network. If I enable MSI for InfiniBand the device doesn't start (some error code, I don't remember), maybe a driver issue since that card is very old. So play with care with MSI, especially for devices needed to boot the VM. You can't make these changes from the host, only from within the VM (a sketch of the registry change itself follows below).
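For reference, the registry value the MSI utility toggles lives under each device's "Device Parameters" key. A hand-edited equivalent, run from an elevated prompt inside the Windows VM, would look roughly like the line below; the device ID and instance ID are placeholders you would look up in Device Manager, and a VM reboot is needed afterwards:

    reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device-id>\<instance-id>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f

Setting MSISupported back to 0 (or deleting the value) reverts the device to line-based interrupts if it refuses to start, as happened with the old InfiniBand card mentioned above.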
    3 points
  21. Go to the Plugins tab and check for updates. You'll want to make sure you are running the latest version of the My Servers plugin, which is currently 2021.09.15.1853. If you are still having issues, open a webterminal and type: unraid-api restart
    3 points
  22. EDIT: As of 2021-09-17, after half a year of active use, this solution seems stable. No stales and no problems encountered.

The Issue and cause
When an SMB share is mounted on Linux, you sometimes might encounter the mount hanging with a "stale file handle" error, something like: cannot access '/mnt/sharename': Stale file handle
This is caused by a file being stored on the cache and then moved to another disk by the Mover. The file's inode changes and the client gets confused because of it. I suspect it could also happen when a file moves to another disk inside the array, but I have not confirmed that, and it would be a fairly rare issue.

Solutions gathered (only one required)
- Disabling the cache or the Mover solves this, but it is not very practical for many people as it takes away a feature.
- Disabling hardlinks also prevents this problem from occurring. However, it disables hardlinks for the whole system. Generally this isn't a huge problem; only certain apps require them, and those still mostly work with hardlinks disabled.
- Change the SMB version to 1.0 on the client side. This has some problems as well, such as exposing the server to the security issues that come with version 1.0. It also has performance problems and lacks features compared to v3.
- Now for the PROPER fix: the key is to add noserverino to the mount flags. This forces the client to generate its own inode numbers rather than using the server's. So far I have not noticed any problems by doing this. I cannot replicate the issue with this method, and I have hardlinks enabled (not required) and SMB v1 disabled (not required).

How to replicate the issue
- Create a file in a share that has cache enabled (not "prefer").
- Browse into that directory inside the mount using the /mnt/user path.
- Run the Mover.
- Witness the stale file handle error.

Ramblings
I found many different variations of the full mount flags; here is what I use (an equivalent one-off mount command is sketched at the end of this post):

//192.168.1.20/sharename /mnt/sharename cifs rw,_netdev,noserverino,uid=MYLOCALUSERNAME,gid=users,credentials=/etc/samba/credentials/MYLOCALUSERNAME,file_mode=0666,dir_mode=0777 0 0

Let's go through the mount flags. I'm using these knowing that posix extensions are DISABLED. With them enabled, you might want to use different flags, especially for permissions. Feel free to change them as you like; noserverino is the ONLY required one.
- rw - Just in case; enables read/write access to the mount. Might be the default anyway.
- _netdev - Makes the system treat this as a network device, so it waits for networking to be up before mounting.
- noserverino - Client-generated inode numbers (required for the fix).
- uid/gid - Optional. Makes the mount appear as if owned by a certain uid; on the server side nothing changes. I'm using these because the files are owned by nobody/users and I otherwise can't open them. There is also a noperm flag, etc. you could use; I just find uid and gid most practical.
- credentials - A file containing the username and password for the share, so that people can't read my password from /etc/fstab. For reference on setting this up: https://wiki.archlinux.org/index.php/samba#Storing_share_passwords
- file_mode/dir_mode - Optional. These make files appear in the share with 0666 and 0777 permissions; they do not actually change permissions on the server side. Without these, file permissions are not "in sync" and appear wrong on the client side, such as 0755 directory permissions while it is 0777 on the server.

Posix/Linux/Unix extensions (not related to stale file handle)
A problem I have not been able to solve is how to enable Posix/Linux/Unix extensions. When I try to enable the extensions, it errors out saying the server does not support them. Inside the Samba config in Unraid there is "unix extensions = No", but turning this to Yes in various ways did not enable them. Why does this matter? Those extensions enable certain nice features that make the share behave like a proper Linux drive. To confirm that unix extensions are not enabled, run mount | grep "//" and you will see the nounix flag. To enable unix extensions manually, add unix to the flags; however, during mount you get an error, and reading dmesg shows the server reported as not supporting unix extensions.

NFS
For NFS I still have no real solution other than disabling hardlinks.
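For a quick one-off test before committing the line to /etc/fstab, an equivalent manual mount - using the same placeholder server address, share name, and credentials file as the example above - would look something like:

    mount -t cifs -o rw,noserverino,uid=MYLOCALUSERNAME,gid=users,credentials=/etc/samba/credentials/MYLOCALUSERNAME,file_mode=0666,dir_mode=0777 //192.168.1.20/sharename /mnt/sharename

If the stale file handle error stops appearing after the Mover runs, the noserverino flag is doing its job and the fstab entry can be made permanent.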
    3 points
  23. Plex (unless you've changed it) runs as Network type Host. You simply forward 32400 to 32400 (Server's IP address) If you've changed Plex to run as BR0, (don't know why you would), then you can assign it a specific IP address in the 192 range If you've changed Plex to run as bridge (really don't know why you would) then yes the internal IP of 172.xx would change, but the net result is the same you still forward to the server's IP and the docker system takes care of the rest. If you still have problems, then post the applicable xml file from /config/plugins/dockerMan/templates-user/my-plex.xml on the flash drive here
    2 points
  24. https://www.youtube.com/watch?v=LRfhSmwS3zs I recently interviewed Ed from the YouTube channel Space Invader One. While we discussed Ed's background and his channel, the focus of the interview is on the main topic Ed covers: Instructional videos about the software UnRAID that allows you to turn any PC into a Network Attached Storage, or "NAS" device. While at first, that might seem like something only IT people would use for backups, the retro gaming world is on the cusp of adopting NAS methods for rom storage and streaming - We're not quite there yet, but if you'd like to get ahead of the curve, definitely check this video out and start experimenting with an old PC and an old set of hard drives. (RetroRGB)
    2 points
  25. Lutris is coming... Along with Retroarch. I have been debugging and sorting out the controller issues this week. I have found the solution and am just working on the best way to get that to Unraid users. @ich777 has kindly offered to create a plugin that will add the required kernel modules to Unraid that this container needs to fix the controller issues. Once this is sorted I will publish an updated template in CA that will include all the new container features and a description in the first post of this thread of what needs to be installed.
    2 points
  26. The following script creates incremental backups by using rsync. Check the settings to define your own paths. Donate? 🤗 #!/bin/bash # ##################################### # Name: rsync Incremental Backup # Description: Creates incremental backups and deletes outdated versions # Author: Marc Gutt # Version: 1.3 # # Changelog: # 1.3 # - Fixed typo which prevented deleting skipped backups # - Fixed typo in notify function which returned wrong importance level # - Better error reporting # - Added support for SSH sources # - Fixed bug while creating destination path on remote server # - Empty dir is not a setting anymore # - Logfile is now random to avoid race conditions # - Delete logfile of skipped backups # 1.2 # - Fixed typo in new "backup is older than..." feature # 1.1 # - Fixed copying log file although backup has been skipped # - Fixed deleting empty_dir while creating destination path # - Create notification if last backup is older than X days (something went wrong for a long time) # 1.0 # - Allow setting "--dry-run" as rsync option (which skips some parts of the script) # - Create destination path if it does not exist # - Fixed wrong minimum file count check # - Fixed broken recognition of remote "mv" command # 0.9 # - Fixed wrong backup path in some situations # - User-defined replacement rules for backup path # - new setting "skip_error_host_went_down" which skips backups if host went down during file transfers # - Important Change: /source/<timestamp>/source has been changed to /source/<timestamp> # - Fixed wrong counting on keeped backups if multiple source paths have been set # 0.8 # - Fixed wrong path while making backups visible on SSH targets # 0.7 # - Empty backups stay invalid (must include at least X files) # - Fixed growing log file problem # - Logs are now located in the backup dir itself # - Added support for SSH destinations (replaced find, rm and mv commands against pure rsync hacks) # - User-defined rsync options # - User can exclude directories, defaults are /Temp, /Tmp and /Cache # - Enhanced user settings (better description and self-explaning variable names) # - Multi-platform support (should now work with Unraid, Ubuntu, Synology...) 
# - Replaced potentially unsafe "rm -r" command against rsync # - User-defined rsync command to allow optional sshpass support # - Keep multiple backups of a day only of the last X days (default 1) # - Important Change: The latest backup of a month is kept as monthly backup (in the past it was only the backup of the 1st of a month) # - Important Change: The latest backup of a year is kept as yearly backup (in the past it was only the backup of the 1st january of a year) # # Todo: # - chunksync hardlinks for huge files (like images) # - docker auto stop and start for consistent container backups (compare container volumes against source paths) # - what happens if backup source disappears while creating backup (like a mounted smb share which goes offline) # - rare case scenario: log filename is not atomic # - test on very first backup if destination supports hardlinks # - resume if source went offline during last backup # ##################################### # ##################################### # Settings # ##################################### # backup source to destination backup_jobs=( # source # destination "/mnt/user/Music" "/mnt/user/Backups/Shares/Music" "user@server:/home/Maria/Photos" "/mnt/user/Backups/server/Maria/Photos" "/mnt/user/Documents" "user@server:/home/Backups/Documents" ) # keep backups of the last X days keep_days=14 # keep multiple backups of one day for X days keep_days_multiple=1 # keep backups of the last X months keep_months=12 # keep backups of the last X years keep_years=3 # keep the most recent X failed backups keep_fails=3 # rsync options which are used while creating the full and incremental backup rsync_options=( # --dry-run --archive # same as --recursive --links --perms --times --group --owner --devices --specials --human-readable # output numbers in a human-readable format --itemize-changes # output a change-summary for all updates --exclude="[Tt][Ee][Mm][Pp]/" # exclude dirs with the name "temp" or "Temp" or "TEMP" --exclude="[Tt][Mm][Pp]/" # exclude dirs with the name "tmp" or "Tmp" or "TMP" --exclude="Cache/" # exclude dirs with the name "Cache" ) # notify if the backup was successful (1 = notify) notification_success=0 # notify if last backup is older than X days notification_backup_older_days=30 # create destination if it does not exist create_destination=1 # backup does not fail if files vanished during transfer https://linux.die.net/man/1/rsync#:~:text=vanished skip_error_vanished_source_files=1 # backup does not fail if source path returns "host is down". # This could happen if the source is a mounted SMB share, which is offline. 
skip_error_host_is_down=1 # backup does not fail if file transfers return "host is down" # This could happen if the source is a mounted SMB share, which went offline during transfer skip_error_host_went_down=1 # backup does not fail, if source path does not exist, which for example happens if the source is an unmounted SMB share skip_error_no_such_file_or_directory=1 # a backup fails if it contains less than X files backup_must_contain_files=2 # a backup fails if more than X % of the files couldn't be transfered because of "Permission denied" errors permission_error_treshold=20 # user-defined rsync command #alias rsync='sshpass -p "<password>" rsync -e "ssh -o StrictHostKeyChecking=no"' # user-defined ssh command #alias ssh='sshpass -p "<password>" ssh -o "StrictHostKeyChecking no"' # ##################################### # Script # ##################################### # make script race condition safe if [[ -d "/tmp/${0//\//_}" ]] || ! mkdir "/tmp/${0//\//_}"; then echo "Script is already running!" && exit 1; fi; trap 'rmdir "/tmp/${0//\//_}"' EXIT; # allow usage of alias commands shopt -s expand_aliases # functions remove_last_slash() { [[ "${1%?}" ]] && [[ "${1: -1}" == "/" ]] && echo "${1%?}" || echo "$1"; } notify() { echo "$2" if [[ -f /usr/local/emhttp/webGui/scripts/notify ]]; then /usr/local/emhttp/webGui/scripts/notify -i "$([[ $2 == Error* ]] && echo alert || echo normal)" -s "$1 ($src_path)" -d "$2" -m "$2" fi } # check user settings backup_path=$(remove_last_slash "$backup_path") [[ "${rsync_options[*]}" == *"--dry-run"* ]] && dryrun=("--dry-run") # check if rsync exists ! command -v rsync &> /dev/null && echo "rsync command not found!" && exit 1 # check if sshpass exists if it has been used echo "$(type rsync) $(type ssh)" | grep -q "sshpass" && ! command -v sshpass &> /dev/null && echo "sshpass command not found!" && exit 1 # set empty dir empty_dir="/tmp/${0//\//_}" # loop through all backup jobs for i in "${!backup_jobs[@]}"; do # get source path and skip to next element ! (( i % 2 )) && src_path="${backup_jobs[i]}" && continue # get destination path dst_path="${backup_jobs[i]}" # check user settings src_path=$(remove_last_slash "$src_path") dst_path=$(remove_last_slash "$dst_path") # get ssh login and remote path ssh_login=$(echo "$dst_path" | grep -oP "^.*(?=:)") remote_dst_path=$(echo "$dst_path" | grep -oP "(?<=:).*") if [[ ! 
"$remote_dst_path" ]]; then ssh_login=$(echo "$src_path" | grep -oP "^.*(?=:)") fi # create timestamp for this backup new_backup="$(date +%Y%m%d_%H%M%S)" # create log file log_file="$(mktemp)" exec &> >(tee "$log_file") # obtain last backup if last_backup=$(rsync --dry-run --recursive --itemize-changes --exclude="*/*/" --include="[0-9]*/" --exclude="*" "$dst_path/" "$empty_dir" 2>&1); then last_backup=$(echo "$last_backup" | grep -oP "[0-9_/]*" | sort -r | head -n1) # create destination path elif echo "$last_backup" | grep -q "No such file or directory" && [[ "$create_destination" == 1 ]]; then unset last_backup last_include if [[ "$remote_dst_path" ]]; then mkdir -p "$empty_dir$remote_dst_path" || exit 1 else mkdir -p "$empty_dir$dst_path" || exit 1 fi IFS="/" read -r -a includes <<< "${dst_path:1}" for j in "${!includes[@]}"; do includes[j]="--include=$last_include/${includes[j]}" last_include="${includes[j]##*=}" done rsync --itemize-changes --recursive "${includes[@]}" --exclude="*" "$empty_dir/" "/" find "$empty_dir" -mindepth 1 -type d -empty -delete else rsync_errors=$(grep -Pi "rsync:|fail|error:" "$log_file" | tail -n3) notify "Could not obtain last backup!" "Error: ${rsync_errors//[$'\r\n'=]/ } ($rsync_status)!" continue fi # create backup echo "# #####################################" # incremental backup if [[ "$last_backup" ]]; then echo "last_backup: '$last_backup'" # warn user if last backup is really old last_backup_days_old=$(( ($(date +%s) - $(date +%s -d "${last_backup:0:4}${last_backup:4:2}${last_backup:6:2}")) / 86400 )) if [[ $last_backup_days_old -gt $notification_backup_older_days ]]; then notify "Last backup is too old!" "Error: The last backup is $last_backup_days_old days old!" fi # rsync returned only the subdir name, but we need an absolute path last_backup="$dst_path/$last_backup" echo "Create incremental backup from $src_path to $dst_path/$new_backup by using last backup $last_backup" # remove ssh login if part of path last_backup="${last_backup/$(echo "$dst_path" | grep -oP "^.*:")/}" rsync "${rsync_options[@]}" --stats --delete --link-dest="$last_backup" "$src_path/" "$dst_path/.$new_backup" # full backup else echo "Create full backup from $src_path to $dst_path/$new_backup" rsync "${rsync_options[@]}" --stats "$src_path/" "$dst_path/.$new_backup" fi # check backup status rsync_status=$? # obtain file count of rsync file_count=$(grep "^Number of files" "$log_file" | cut -d " " -f4) file_count=${file_count//,/} [[ "$file_count" =~ ^[0-9]+$ ]] || file_count=0 echo "File count of rsync is $file_count" # success if [[ "$rsync_status" == 0 ]]; then message="Success: Backup of $src_path was successfully created in $dst_path/$new_backup ($rsync_status)!" # source path is a mounted SMB server which is offline elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" == 0 ]] && [[ $(grep -c "Host is down (112)" "$log_file") == 1 ]]; then message="Skip: Backup of $src_path has been skipped as host is down" [[ "$skip_error_host_is_down" != 1 ]] && message="Error: Host is down!" elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" -gt 0 ]] && [[ $(grep -c "Host is down (112)" "$log_file") == 1 ]]; then message="Skip: Backup of $src_path has been skipped as host went down" [[ "$skip_error_host_went_down" != 1 ]] && message="Error: Host went down!" 
# source path is wrong (maybe unmounted SMB server) elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" == 0 ]] && [[ $(grep -c "No such file or directory (2)" "$log_file") == 1 ]]; then message="Skip: Backup of $src_path has been skipped as source path does not exist" [[ "$skip_error_no_such_file_or_directory" != 1 ]] && message="Error: Source path does not exist!" # check if there were too many permission errors elif [[ "$rsync_status" == 23 ]] && grep -c "Permission denied (13)" "$log_file"; then message="Warning: Some files had permission problems" permission_errors=$(grep -c "Permission denied (13)" "$log_file") error_ratio=$((100 * permission_errors / file_count)) # note: integer result, not float! if [[ $error_ratio -gt $permission_error_treshold ]]; then message="Error: $permission_errors/$file_count files ($error_ratio%) return permission errors ($rsync_status)!" fi # some source files vanished elif [[ "$rsync_status" == 24 ]]; then message="Warning: Some files vanished" [[ "$skip_error_vanished_source_files" != 1 ]] && message="Error: Some files vanished while backup creation ($rsync_status)!" # all other errors are critical else rsync_errors=$(grep -Pi "rsync:|fail|error:" "$log_file" | tail -n3) message="Error: ${rsync_errors//[$'\r\n'=]/ } ($rsync_status)!" fi # backup remains or is deleted depending on status # delete skipped backup if [[ "$message" == "Skip"* ]]; then echo "Delete $dst_path/.$new_backup" rsync "${dryrun[@]}" --recursive --delete --include="/.$new_backup**" --exclude="*" "$empty_dir/" "$dst_path" # check if enough files have been transferred elif [[ "$message" != "Error"* ]] && [[ "$file_count" -lt "$backup_must_contain_files" ]]; then message="Error: rsync transferred less than $backup_must_contain_files files! ($message)!" # keep successful backup elif [[ "$message" != "Error"* ]]; then echo "Make backup visible ..." # remote backup if [[ "$remote_dst_path" ]]; then # check if "mv" command exists on remote server as it is faster if ssh -n "$ssh_login" "command -v mv &> /dev/null"; then echo "... through remote mv (fast)" [[ "${dryrun[*]}" ]] || ssh "$ssh_login" "mv \"$remote_dst_path/.$new_backup\" \"$remote_dst_path/$new_backup\"" # use rsync (slower) else echo "... through rsync (slow)" # move all files from /.YYYYMMDD_HHIISS to /YYYYMMDD_HHIISS if ! rsync "${dryrun[@]}" --delete --recursive --backup --backup-dir="$remote_dst_path/$new_backup" "$empty_dir/" "$dst_path/.$new_backup"; then message="Error: Could not move content of $dst_path/.$new_backup to $dst_path/$new_backup!" # delete empty source dir elif ! rsync "${dryrun[@]}" --recursive --delete --include="/.$new_backup**" --exclude="*" "$empty_dir/" "$dst_path"; then message="Error: Could not delete empty dir $dst_path/.$new_backup!" fi fi # use local renaming command else echo "... through local mv" [[ "${dryrun[*]}" ]] || mv -v "$dst_path/.$new_backup" "$dst_path/$new_backup" fi fi # notification if [[ $message == "Error"* ]]; then notify "Backup failed!" "$message" elif [ "$notification_success" == 1 ]; then notify "Backup done." "$message" fi # loop through all backups and delete outdated backups echo "# #####################################" echo "Clean up outdated backups" unset day month year day_count month_count year_count while read -r backup_name; do # failed backups if [[ "${backup_name:0:1}" == "." ]] && ! 
[[ "$backup_name" =~ ^[.]+$ ]]; then if [[ "$keep_fails" -gt 0 ]]; then echo "Keep failed backup: $backup_name" keep_fails=$((keep_fails-1)) continue fi echo "Delete failed backup: $backup_name" # successful backups else last_year=$year last_month=$month last_day=$day year=${backup_name:0:4} month=${backup_name:4:2} day=${backup_name:6:2} # all date parts must be integer if ! [[ "$year$month$day" =~ ^[0-9]+$ ]]; then echo "Error: $backup_name is not a backup!" continue fi # keep all backups of a day if [[ "$day_count" -le "$keep_days_multiple" ]] && [[ "$last_day" == "$day" ]] && [[ "$last_month" == "$month" ]] && [[ "$last_year" = "$year" ]]; then echo "Keep multiple backups per day: $backup_name" continue fi # keep daily backups if [[ "$keep_days" -gt "$day_count" ]] && [[ "$last_day" != "$day" ]]; then echo "Keep daily backup: $backup_name" day_count=$((day_count+1)) continue fi # keep monthly backups if [[ "$keep_months" -gt "$month_count" ]] && [[ "$last_month" != "$month" ]]; then echo "Keep monthly backup: $backup_name" month_count=$((month_count+1)) continue fi # keep yearly backups if [[ "$keep_years" -gt "$year_count" ]] && [[ "$last_year" != "$year" ]]; then echo "Keep yearly backup: $backup_name" year_count=$((year_count+1)) continue fi # delete outdated backups echo "Delete outdated backup: $backup_name" fi # ssh if [[ "$remote_dst_path" ]]; then if ssh -n "$ssh_login" "command -v rm &> /dev/null"; then echo "... through remote rm (fast)" [[ "${dryrun[*]}" ]] || ssh "$ssh_login" "rm -r \"${remote_dst_path:?}/${backup_name:?}\"" else echo "... through rsync (slow)" rsync "${dryrun[@]}" --recursive --delete --include="/$backup_name**" --exclude="*" "$empty_dir/" "$dst_path" fi # local (rm is 50% faster than rsync) else [[ "${dryrun[*]}" ]] || rm -r "${dst_path:?}/${backup_name:?}" fi done < <(rsync --dry-run --recursive --itemize-changes --exclude="*/*/" --include="[.0-9]*/" --exclude="*" "$dst_path/" "$empty_dir" | grep -oP "[.0-9_]*" | sort -r) # move log file to destination log_path=$(rsync --dry-run --itemize-changes --include=".$new_backup/" --include="$new_backup/" --exclude="*" --recursive "$dst_path/" "$empty_dir" | cut -d " " -f 2) [[ $log_path ]] && rsync "${dryrun[@]}" --remove-source-files "$log_file" "$dst_path/$log_path/$new_backup.log" [[ -f "$log_file" ]] && rm "$log_file" done Explanations All created backups are full backups with hardlinks to already existing files (~ incremental backup) All backups use the most recent backup to create hardlinks or new files. Deleted files are not copied (1:1 backup) There are no dependencies between the most recent backup and the previous backups. You can delete as many backups as you like. All backups that are left, are still full backups. This could be confusing as most incremental backup softwares need the previous backups for restoring the data. But this is not valid for rsync and hardlinks. Read here if you need more informations about links, inodes and files. After a backup has been created the script purges the backup dir and keeps only the backups of the last 14 days, 12 month and 3 years, which can be defined through the settings logs can be found inside of each backup folder Sends notifications after job execution How to execute this script? Use the User Scripts Plugin (Unraid Apps) to execute it by schedule Use the Unassigned Devices Plugin (Unraid Apps) to execute it after mounting a USB drive How does a backup look like? 
This is how the backup dir looks after several months (it kept the backups of 2020-07-01, 2020-08-01 ... and all backups of the last 14 days). And as it's an incremental backup, the storage usage is low (as you can see, I bought new music before "2020-08-01" and before "2020-10-01"):

du -d1 -h /mnt/user/Backup/Shares/Music | sort -k2
168G    /mnt/user/Backup/Shares/Music/20200701_044011
4.2G    /mnt/user/Backup/Shares/Music/20200801_044013
3.8M    /mnt/user/Backup/Shares/Music/20200901_044013
497M    /mnt/user/Backup/Shares/Music/20201001_044014
4.5M    /mnt/user/Backup/Shares/Music/20201007_044016
4.5M    /mnt/user/Backup/Shares/Music/20201008_044015
4.5M    /mnt/user/Backup/Shares/Music/20201009_044001
4.5M    /mnt/user/Backup/Shares/Music/20201010_044010
4.5M    /mnt/user/Backup/Shares/Music/20201011_044016
4.5M    /mnt/user/Backup/Shares/Music/20201012_044020
4.5M    /mnt/user/Backup/Shares/Music/20201013_044014
4.5M    /mnt/user/Backup/Shares/Music/20201014_044015
4.5M    /mnt/user/Backup/Shares/Music/20201015_044015
4.5M    /mnt/user/Backup/Shares/Music/20201016_044017
4.5M    /mnt/user/Backup/Shares/Music/20201017_044016
4.5M    /mnt/user/Backup/Shares/Music/20201018_044008
4.5M    /mnt/user/Backup/Shares/Music/20201018_151120
4.5M    /mnt/user/Backup/Shares/Music/20201019_044002
172G    /mnt/user/Backup/Shares/Music

Warnings
It's not the best idea to back up huge files that change often, such as disk images, as the whole file will be copied each time. A file changing while rsync copies it will result in a corrupted copy, as rsync does not lock files. If you want to back up, for example, a VM image file or a container database, stop it first (to avoid further writes) before executing this script! Never change a file that is inside a backup directory - that changes the same file in every backup (this is how hardlinks work; see the inode check sketched below)!
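A quick way to see the hardlinking at work is to compare inode numbers for the same file across two backup folders; the album and track names below are hypothetical examples, not files from the listing above. Identical inode numbers and a link count greater than 1 mean both entries point at the same data on disk:

    stat -c '%i %h %n' \
      /mnt/user/Backup/Shares/Music/20200701_044011/SomeAlbum/track01.mp3 \
      /mnt/user/Backup/Shares/Music/20200801_044013/SomeAlbum/track01.mp3
    # same inode + link count > 1  ->  one physical copy shared by both backups

This is also why editing a file inside any backup folder silently "edits" it in every other backup that shares the inode.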
    2 points
  27. Easiest way to tell if stuff has been uploaded to the cloud is to check your mount_rclone folder (as this is basically a folder mapped directly to GDrive). To answer your first question: yes, you can copy stuff manually to your mergerfs folder. I've noticed for me, though, that when I do, it copies at like 10MB/sec, but if I copy to my "local" folder directly (by manually creating the music folder, for example) it copies at gigabit speeds and is then uploaded.
    2 points
  28. Loud and clear. Looks like I'll just be passing on block devices in that case and staying away from fileIO images. I don't feel very confident about software-RAIDing multiple iSCSI devices on Windows, so I'll just stick with individual drives. Anyway, when speed is really of the essence I'll stick to local NVMe storage on the gaming machine. The iSCSI extra storage is so I can keep more of my Steam library permanently installed. And iSCSI is certainly a lifesaver here, because both Origin and NVIDIA GameStream don't play nice with games running from regular network shares. With iSCSI, not a problem. Now it's time to go break my btrfs pool and pass on some block storage. Cheers!
    2 points
  29. I updated all the "foreign" translation repos to the latest English (en_US) source files.
    2 points
  30. It is a bad idea to post the address of your server on a public place on the Internet. I edited that for you.
    2 points
  31. A tip for those who are impatient... To get the latest version of a language pack, simply remove the existing language pack and install it again. This will automatically install the latest available version.
    2 points
  32. Hum, it's complicated, so I did not vote. I generally use Chrome, but I access Unraid through Firefox so that I keep my general web-browsing and Unraid management separate.
    2 points
  33. Having issues getting the latest release working?
- If Pi-hole is running but the status in the UI/API is not shown as active, add an environment variable 'DNSMASQ_USER' with the value 'root'. This should be fixed soon in a new release (they are working on it), so remove this variable again once the new update is released and see if it's no longer required.
- Other issues? Review all of your docker template environment variables against the currently recommended and optional ones here: https://github.com/pi-hole/docker-pi-hole. There have been a lot of changes recently that haven't been reflected in Unraid automatically. I suggest removing old ones that are no longer in use or only optional and seeing if that fixes the issues, then adding back any you know you need. The consolidation of the DNS variables into a single PIHOLE_DNS one was a big recent change to watch for.
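For anyone running the container outside the Unraid template, the same workaround is just another -e flag on the run command. A minimal sketch, assuming the standard Pi-hole image and its documented TZ/WEBPASSWORD variables; the appdata paths, password and timezone are placeholders to adjust:

# temporary workaround from the post above: run dnsmasq as root
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80 \
  -e TZ="Europe/Berlin" \
  -e WEBPASSWORD="changeme" \
  -e DNSMASQ_USER="root" \
  -v /mnt/user/appdata/pihole/:/etc/pihole/ \
  -v /mnt/user/appdata/pihole/dnsmasq.d/:/etc/dnsmasq.d/ \
  pihole/pihole:latest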
    2 points
  34. Hello Unraid Community!

It has come to our attention that in recent days we've seen a significant uptick in the number of Unraid servers being compromised due to poor security practices. The purpose of this post is to help our community verify that their servers are secure and to provide helpful best-practice recommendations to ensure your system doesn't become another statistic. Please review the below recommendations on your server(s) to ensure they are safe.

Set a strong root password
Similar to many routers, Unraid systems do not have a password set by default. This is to ensure you can quickly and easily access the management console immediately after initial installation. However, this doesn't mean you shouldn't set one. Doing this is simple. Just navigate to the Users tab and click on root. Now set a password. From then on, you will be required to authenticate anytime you attempt to log in to the webGui. In addition, there is a plugin available in Community Apps called Dynamix Password Validator. This plugin will provide guidance on how strong of a password you're creating based on complexity rules (how many capital vs. lowercase letters, numbers, symbols, and overall password length are used to judge this). Consider installing this for extra guidance on password strength.

Review port mappings on your router
Forwarding ports to your server is required for specific services that you want to be Internet-accessible, such as Plex, FTP servers, game servers, VoIP servers, etc. But forwarding the wrong ports can expose your server to significant security risk. Here are just a few ports you should be extra careful with when forwarding:

Port 80: Used to access the webGui without SSL (unless you've rebound access to another port on the Management Access settings page). DO NOT forward port 80. Forwarding this port by default will allow you to access the webGui remotely, but without SSL securing the connection, devices in between your browser and the server could "sniff" the packets to see what you're doing. If you want to make the webGui remotely accessible, install the Unraid.net plugin to enable My Servers on your system, which can provide a secure remote access solution that utilizes SSL to ensure your connection is fully encrypted.

Port 443: Used to access the webGui with SSL. This is only better than port 80 if you have a root password set. If no root password is set and you forward this port, unauthorized users can connect to your webGui and have full access to your server. In addition, if you forward this port without using the Unraid.net plugin and My Servers, attempts to connect to the webGui through a browser will present a security warning due to the lack of an SSL certificate. Consider making life easier for yourself and utilize Unraid.net with My Servers to enable simple, safe, and secure remote access to your Unraid systems. NOTE: When setting up Remote Access in My Servers, we highly recommend you choose a random port over 1000 rather than using the default of 443.

Port 445: Used for SMB (shares). If you forward this port to your server, any public shares can be connected to by any user over the internet. Generally speaking, it is never advisable to expose SMB shares directly over the internet. If you need the ability to access your shares remotely, we suggest utilizing a Wireguard VPN to create a secure tunnel between your device and the server.
In addition, if the flash device itself is exported using SMB and this port is forwarded, its contents can easily be deleted and your paid key could easily be stolen. Just don't do this.

Port 111/2049: Used for NFS (shares). While NFS is disabled by default, if you are making use of this protocol, just make sure you aren't forwarding these ports through your router. Similar to SMB, just utilize Wireguard to create a secure tunnel from any remote devices that need to connect to the server over NFS.

Port 22/23: Used by Telnet and SSH for console access. Especially dangerous for users that don't have a root password set. Similar to SMB, we don't recommend forwarding these ports at all, but rather suggest users leverage a Wireguard VPN connection for the purposes of connecting using either of these protocols.

Ports in the 57xx range: These ports are generally used by VMs for VNC access. While you can forward these ports to enable VNC access remotely for your VMs, the better and easier way to do this is through installing the Unraid.net plugin and enabling My Servers. This ensures that those connections are secure via SSL and does not require individual ports to be forwarded for each VM.

Generally speaking, you really shouldn't need to forward many ports to your server. If you see a forwarding rule you don't understand, consider removing it, see if anyone complains, and if so, you can always put it back.

Never ever ever put your server in the DMZ
No matter how locked down you think you have your server, it is never advisable to place it in the DMZ on your network. By doing so, you are essentially forwarding every port on your public IP address to your server directly, allowing all locally accessible services to be remotely accessible as well. Regardless of how "locked down" you think you actually have the server, placing it in the DMZ exposes it to unnecessary risks. Never ever do this.

Consider setting shares to private with users and passwords
The convenience of password-less share access is pretty great. We know that, and it's why we don't require you to set passwords for your shares. However, there is a security risk posed to your data when you do this, even if you don't forward any ports to your server and have a strong root password. If another device on your network such as a PC, Mac, phone, tablet, IoT device, etc. were to have its security breached, it could be used to make a local connection to your server's shares. By default, shares are set to be publicly readable/writeable, which means those rogue devices can be used to steal, delete, or encrypt the data within them. In addition, malicious users could also use this method to put data on your server that you don't want. It is for these reasons that if you are going to create public shares, we highly recommend setting access to read-only. Only authorized users with a strong password should be able to write data to your shares.

Don't expose the Flash share, and if you do, make it private
The flash device itself can be exposed over SMB. This is convenient if you need to make advanced changes to your system such as modifying the go file in the config directory. However, the flash device itself contains the files needed to boot Unraid as well as your configuration data (disk assignments, shares, etc). Exposing this share publicly can be extremely dangerous, so we advise against doing so unless you absolutely have to, and when you do, it is advised to do so privately, requiring a username and password to see and modify the contents.
Keep your server up-to-date Regardless of what other measures you take, keeping your server current with the latest release(s) is vital to ensuring security. There are constant security notices (CVEs) published for the various components used in Unraid OS. We here at Lime Technology do our best to ensure all vulnerabilities are addressed in a timely manner with software updates. However, these updates are useless to you if you don't apply them in a timely manner as well. Keeping your OS up-to-date is easy. Just navigate to Tools > Update OS to check for and apply any updates. You can configure notifications to prompt you when a new update is available from the Settings > Notifications page. More Best Practices Recommendations Set up and use WireGuard, OpenVPN or nginxProxyManager for secure remote access to your Shares. For WireGuard set up, see this handy getting started guide. Set up 2FA on your Unraid Forum Account. Set up a Remote Syslog Server. Install the Fix Common Problems plugin. Installing this plugin will alert you to multiple failed login attempts and much, much more. Change your modem password to something other than the default. Consider installing ClamAV. In addition to all of the above recommendations, we've asked SpaceInvaderOne to work up a video with even more detailed best-practices related to Unraid security. We'll post a link as soon as the video is up to check out what other things you can do to improve your system security. It is of vital importance that all users review these recommendations on their systems as soon as possible to ensure that you are doing all that is necessary to protect your data. We at Lime Technology are committed to keeping Unraid a safe and secure platform for all of your personal digital content, but we can only go so far in this effort. It is ultimately up to you the user to ensure your network and the devices on it are adhering to security best-practices.
    2 points
  35. The server verifies logged-in users against the Minecraft main servers to make sure they are licensed... If, for some reason, your server can't reach the Mojang servers, the user will be refused... To mitigate that, you can change the online-mode=true line in your "server.properties" file (in appdata) to false. I am new to this (been running my server for less than 2 weeks), but this source helped me a lot with setting up the server: https://minecraft.fandom.com/wiki/Server.properties
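A quick way to flip that flag from the Unraid console, as a sketch only: the appdata path and container name below are assumptions based on a typical Minecraft template, so adjust them to your setup (and be aware that offline mode skips Mojang's license check entirely).

# switch the server to offline mode so logins no longer require Mojang's auth servers
sed -i 's/^online-mode=true/online-mode=false/' /mnt/user/appdata/minecraft/server.properties

# restart the container so the change takes effect (container name may differ on your system)
docker restart Minecraft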
    2 points
  36. You do not create a variable called Extra Parameters. It is already part of the template. You must turn on Advanced View (Basic/Advanced View toggle) in the container template to see it. Since I have an Intel CPU with an iGPU, I use that for transcoding, as seen below. You need --runtime=nvidia in Extra Parameters.
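For reference, the template's Extra Parameters field simply appends the same flag you would pass on a manual docker run. A rough sketch of the NVIDIA case (the image is just the linuxserver Plex image used as a stand-in, and the GPU UUID is a placeholder you would take from nvidia-smi -L):

# Extra Parameters field in the Unraid template:
#   --runtime=nvidia

# equivalent manual run; list your GPU UUID with: nvidia-smi -L
docker run -d --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES="GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  -e NVIDIA_DRIVER_CAPABILITIES="all" \
  lscr.io/linuxserver/plex:latest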
    2 points
  37. I released version 1.3:

# - Fixed typo which prevented deleting skipped backups
# - Fixed typo in notify function which returned wrong importance level
# - Better error reporting
# - Added support for SSH sources
# - Fixed bug while creating destination path on remote server
# - Empty dir is not a setting anymore
# - Logfile is now random to avoid race conditions
# - Delete logfile of skipped backups

As you can see, it's now possible to back up SSH sources, which is shown as an example in the settings:

# backup source to destination
backup_jobs=(
  # source                           # destination
  "/mnt/user/Music"                  "/mnt/user/Backups/Shares/Music"
  "user@server:/home/Maria/Photos"   "/mnt/user/Backups/server/Maria/Photos"
  "/mnt/user/Documents"              "user@server:/home/Backups/Documents"
)
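For anyone adapting the settings: the array is just a flat list of source/destination pairs. As a rough sketch of how such a list can be consumed (not necessarily the script's exact code), it is walked two entries at a time:

# walk the flat array in pairs: even index = source, odd index = destination
for ((i=0; i<${#backup_jobs[@]}; i+=2)); do
  src="${backup_jobs[i]}"
  dst="${backup_jobs[i+1]}"
  echo "Backing up $src -> $dst"
done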
    2 points
  38. Pretty sure that linuxserver no longer supports this app at all. BUT, from what I remember from years ago, you had to initially set up how the library gets scanned etc. on another "real" Kodi instance, then copy over advancedsettings.xml and sources.xml from that instance to the headless one, and then use the headless app as the target when you tell Radarr etc. to update the library. You're also, odds on, going to use MySQL/MariaDB to hold the database. More info here on the app that's currently in CA: https://github.com/matthuisman/docker-kodi-headless But to be honest, unless things have changed, Kodi-Headless was always a "hack" that mostly worked, and you're better off just having your Kodi boxes set to rescan the library when starting, with the library stored on a MariaDB docker instead of internally.
    2 points
  39. Hi All, I've got liquidctl packaged up in a Python wheel. I switched the smbus library for smbus2, as Slackware and the dev tools for Unraid don't have an implementation. Other prerequisites are the ones listed in the liquidctl install instructions. pip and Python will need to be installed with https://forums.unraid.net/topic/35866-unraid-6-nerdpack-cli-tools-iftop-iotop-screen-kbd-etc/ The wheel has all the required libraries included (smbus2 is all Python). Git clone / download the zip from https://github.com/thecutehacker/liquidctl---Unraid and in the command line go to the liquidctl/dist folder and type pip install liquidctl-1.7.2-py3-none-any.whl Hopefully I'll have time to put it into a plugin soon. Installing it is only temporary and will be lost on reboot. You could always have a startup script that reinstalls it from an existing user share/storage (a sketch of that is below). Hopefully this is useful for someone!
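A minimal sketch of such a startup reinstall, assuming you copied the wheel to a persistent share first; the path is a placeholder and the script would be run via the User Scripts plugin at array start:

#!/bin/bash
# reinstall the liquidctl wheel after a reboot (Unraid's root filesystem is not persistent)
WHEEL="/mnt/user/appdata/liquidctl/liquidctl-1.7.2-py3-none-any.whl"

if ! command -v liquidctl &> /dev/null; then
  pip install "$WHEEL"
fi

# quick smoke test: list detected devices
liquidctl list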
    2 points
  40. @ghost82 Thanks, excellent article on the topic of Windows IRQs. I have followed the technical work of Mark Russinovich since the mid 90s; he is a master of the deep inner workings of the Windows OS.
    2 points
  41. Will see if I can add something over the holidays.
    2 points
  42. Fixed with PR. Please pull the image down an hour from now. Sent from my CLT-L09 using Tapatalk
    2 points
  43. Thought it would be worthwhile to add a reason to this. I'm using both the Unraid Wireguard implementation and this container. Unraid for the reasons you'd expect. This container I use to give VPN access into an isolated docker network that contains services for friends and family to use. This way, I didn't have to expose tons of ports to the open internet. I know people are going to read this and say "well, just use a reverse proxy", but that wasn't possible for these services (dedicated game servers, VoIP servers, the like), and as such this gives us a safer implementation. Props to @SmartPhoneLover for this container, really love it!
    2 points
  44. Below I include my Unraid (Version: 6.10.0-rc1) "Samba extra configuration". This configuration is working well for me accessing Unraid shares from macOS Monterey 12.0.1, and I expect these parameters will work okay for Unraid 6.9.2. The "veto" commands speed up performance on macOS by disabling Finder features (labels/tags, folder/directory views, custom icons, etc.), so you might like to include or exclude these lines per your requirements. Note, there are problems with Samba version 4.15.0 in Unraid 6.10.0-rc2 causing unexpected dropped SMB connections… (behavior like this should be anticipated in pre-release) but fixes are expected in future releases. This configuration is based on a Samba configuration recommended for macOS users from 45Drives here: KB450114 – MacOS Samba Optimization.

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end

[global]
vfs objects = catia fruit streams_xattr
fruit:nfs_aces = no
fruit:zero_file_id = yes
fruit:metadata = stream
fruit:encoding = native
spotlight backend = tracker

[data01]
path = /mnt/user/data01
veto files = /._*/.DS_Store/
delete veto files = yes
spotlight = yes

My Unraid share is "data01". Give attention to modifying the configuration for your particular shares (and other requirements). I hope providing this might help others to troubleshoot and optimize SMB for macOS.
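After pasting this into Settings > SMB > Samba extra configuration, it may be worth sanity-checking the merged config before bouncing the service. A quick sketch from the Unraid console (testparm ships with Samba; the rc.samba path is the usual Slackware location, so verify it exists on your release, or simply toggle SMB in the GUI instead):

# check the effective Samba configuration for syntax errors
testparm -s /etc/samba/smb.conf

# restart SMB so the new settings take effect
/etc/rc.d/rc.samba restart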
    2 points
  45. Create an Unraid container of PlexTraktSync:
Go to the Docker section, under "Docker Containers", and click "Add Container".
Click the advanced view to see all of the available parameters.
Leave the template blank/unselected.
Under Name: enter a name for the docker (e.g., PlexTraktSync).
Under Repository: enter ghcr.io/taxel/plextraktsync:latest (or whatever tag you want).
Under Extra Parameters: enter -it for interactive mode.
Click "Apply". The container should start automatically. If not, start it.
Enter the console for the container.
Enter python3 -m plextraktsync to start the credential process described above.
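If you prefer the command line to the template editor, a rough equivalent sketch is below. It only mirrors the settings listed above; any config volume the project's README calls for would still need to be added, so treat this as a starting point rather than a complete setup.

# create the container detached but with a tty/stdin, matching the template's -it
docker run -d -it --name PlexTraktSync ghcr.io/taxel/plextraktsync:latest

# open a console in the running container and start the credential process
docker exec -it PlexTraktSync python3 -m plextraktsync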
    2 points
  46. I didn't find a solution to this problem with the SMB share, but I have found a workaround for now.
On my Mac I had to go into Settings --> Security & Privacy --> Full Disk Access and approve Terminal.app. That is in prep for the next part.
Mount your Unraid SMB share ("SMB-TM-UNRAID" in this example) on the Mac client.
Open Terminal and run:
sudo hdiutil create -size 300g -type SPARSEBUNDLE -nospotlight -volname "SMBTimeMachine" -fs "Case-sensitive Journaled HFS+" -verbose /Volumes/SMB-TM-UNRAID/${HOST}_TimeMachine.sparsebundle
Navigate to the directory /Volumes/SMB-TM-UNRAID/ in Finder and click the .sparsebundle file to mount it as a disk. This will mount a new disk called SMBTimeMachine on your Mac.
Open Terminal and run:
sudo tmutil setdestination -a "/Volumes/SMBTimeMachine/"
Open the Time Machine utility and configure.
You'll need to configure the Mac to remount these shares at boot: Settings --> Users & Groups --> Your User --> Login Items. Drag the mounted drives SMB-TM-UNRAID and SMBTimeMachine into the Login Items window.
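To confirm the destination actually took, macOS can list its configured Time Machine targets; a quick check after the steps above:

# list configured Time Machine destinations; the SMBTimeMachine volume should appear here
tmutil destinationinfo

# optionally kick off the first backup immediately instead of waiting for the schedule
sudo tmutil startbackup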
    2 points
  47. Here's how I set up Home Assistant OS as an unraid VM today. I used one additional step: converting the qcow2 image to raw, thin-provisioned. I don't know what the performance implications are of qcow2 versus raw, but I figure raw must be unraid's default for a reason.
1. In the unraid GUI, create a new Linux VM with 2 CPUs and 2 GB RAM (official recommended spec as of late 2021) and leave everything else at defaults, but do not start it automatically yet.
2. Download the KVM image (.qcow2) from home-assistant.io
3. Extract the .qcow2 file out of the archive using 7-zip or whatever.
4. Put the .qcow2 file in the directory for the VM you just created. I used an SMB share to make this accessible from my Windows machine.
5. Open a bash shell on the unraid host and navigate to your virtual machine's home directory.
6. Delete the disk image that was created for your VM by default.
rm disk1.img
7. Convert the qcow2 image to raw with this command (the "-o preallocation=off" part makes it thin-provisioned), but modify the paths of course to match your system.
qemu-img convert -p -f qcow2 -O raw -o preallocation=off "/mnt/user/vm/virtualmachines/Home Assistant/haos_ova-6.5.qcow2" "/mnt/user/vm/virtualmachines/Home Assistant/vdisk1.img"
That's it. You can start the VM and it should boot straight into Home Assistant. I didn't need to increase the size of the virtual disk, but if you want to, first check its size with
qemu-img info "/path/to/disk1.img"
then, for example, to add 32 gigabytes to its capacity, I believe the command would be
qemu-img resize "/path/to/disk1.img" +32G
    2 points
  48. Hey guys, very late response, but hopefully it helps anyone having a google and stumbling upon this thread. I was wanting to import a Home Assistant VM into Unraid running on an Intel NUC i5.
In the VM config in the web UI:
Create a new VM with the rough config that the image was set up with. I had a ballpark guess with the chipset and BIOS settings but found that Q35-4.2 and OVMF BIOS worked straight off the bat. Most VMs tend to just cop RAM changes like a champ, so I wasn't too worried about that.
In the Primary vDisk settings make sure the type is set to qcow2 and take note of where the image is stored (typically in the domains directory). You'll need to enable the domains share to access the directory.
Set up whatever else you wish and UNCHECK the start VM on completion option.
Once done and created, go to the tower's network shares and find the image (for me it was \\NUCRAID\domains\HassIO\vdisk1.img).
Swap out your vdisk1.img with your qcow2 image, name it vdisk1 and replace the .qcow2 extension with .img (this works because we set the VM to expect a qcow2 format instead of raw; Unraid still uses the .img extension for both).
When done, go to the VM manager, hit start and watch it spring to life.
Enjoy and good luck, fellas. B
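If you want to double-check that the renamed file really is qcow2 before hitting start (so it matches the vDisk type set in the template), qemu-img can report the on-disk format. A quick sketch from the Unraid console, using this post's example path:

# "file format" in the output should say qcow2, not raw
qemu-img info "/mnt/user/domains/HassIO/vdisk1.img"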
    2 points
  49. How do I limit the memory usage of a docker application? Personally, on my system I limit the memory of most of my docker applications so that there is always (hopefully) memory available for other applications / unRaid if the need arises. E.g., if you watch CA's resource monitor / cAdvisor carefully when an application like nzbGet is unpacking / par-checking, you will see that its memory usage skyrockets, but the same operation can take place in far less memory (albeit at a slightly slower speed). The memory used will not be available to another application such as Plex until after the unpack / par check is completed. To limit the memory usage of a particular app, add this to the extra parameters section of the app when you edit / add it:
--memory=4G
This will limit the memory of the application to a maximum of 4G.
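Outside of the Unraid template, this is just the standard Docker flag, and you can confirm the limit is being enforced. A minimal sketch; the container name and image below are placeholders, not a recommendation:

# equivalent flag on a manual run
docker run -d --name nzbget --memory=4G lscr.io/linuxserver/nzbget:latest

# confirm the cap: the MEM USAGE / LIMIT column should show .../4GiB
docker stats --no-stream nzbget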
    2 points