Leaderboard

Popular Content

Showing content with the highest reputation since 08/16/21 in Posts

  1. Refer to Summary of New Features for an overview of changes since version 6.9.

     To upgrade: First create a backup of your USB flash boot device: Main/Flash/Flash Backup. If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/stable/unRAIDServer.plg

     Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.

     Credits: Special thanks to all our beta testers and especially:
     - @bonienl for his continued refinement and updating of the Dynamix webGUI.
     - @Squid for continued refinement of Community Apps and the associated feed.
     - @dlandon for continued refinement of the Unassigned Devices plugin and patience as we change things under the hood.
     - @ich777 for assistance and for passing on knowledge of Linux kernel config changes to support third-party drivers and other kernel-related functionality via plugins.
     - @SimonF for refinements to the System Devices page and other webGUI improvements.
     - @thohell for an extra set of eyes on the md/unraid driver and for work in progress on supporting multiple Unraid arrays.
     - @JorgeB for rigorous testing of the storage subsystem.
    31 points
  2. ***Update***: Apologies, it seems an update to the Unraid forums removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

     ===========

     Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

     What is Plex Hardware Acceleration? When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:
     - Media is stored in a compatible file container
     - Media is encoded in a compatible bitrate
     - Media is encoded with compatible codecs
     - Media is a compatible resolution
     - Bandwidth is sufficient

     If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

     A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering/buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file. The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time.
     Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

     How Do I Know If I'm Transcoding? You're able to see how media is being served by playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

     Prerequisites:
     1. A Plex Pass - if you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass.
     2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
     3. A compatible motherboard.

     You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO/IPMI which allows the server to be monitored/managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server is passed through the ancient Matrox GPU.
     So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

     Check Your Setup: If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log in to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type: cd /dev/dri then ls. If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry - type this: modprobe i915. There should be no return or errors in the output. Now again run: cd /dev/dri then ls. You should see the expected items, i.e. card0 and renderD128.

     Give Your Container Access: Lastly, we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a maker of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window: chmod -R 777 /dev/dri. Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down and enter the following: Name: /dev/dri, Value: /dev/dri. Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder.
     Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration.

     Persist Your Config: On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal: nano /boot/config/go and add the following lines to the bottom of the go file: modprobe i915 and chmod -R 777 /dev/dri. Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
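     As a quick recap of the check above, here's a minimal sketch (assuming the /dev/dri device nodes described in this guide) that reports whether the Quick Sync devices are present:

     ```shell
     #!/bin/sh
     # Sketch: check for the Quick Sync device nodes this guide expects.
     # Where they are missing, the guide's fix is:
     #   modprobe i915 && chmod -R 777 /dev/dri
     dri_ready() {
         [ -e /dev/dri/card0 ] && [ -e /dev/dri/renderD128 ]
     }

     if dri_ready; then
         STATUS="ready"
     else
         STATUS="missing - run modprobe i915 and re-check"
     fi
     echo "Quick Sync devices: $STATUS"
     ```

     Dropping the same two fix-up lines into /boot/config/go, as described above, makes them run on every boot.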
    26 points
  3. Well it was a nice thought, but we are still clearly having issues. You are probably in a state where the webgui loads and initially your name is shown in the upper right corner, but after a second it is replaced with "Sign In". It pains me to say this but for now please turn off the unraid-api. At a web terminal type: unraid-api stop In this mode, Remote Access and Flash Backup should remain active, but you will see a "graphql is offline" error message when you click on your name in the upper right corner. When you visit the My Servers Dashboard ( https://forums.unraid.net/my-servers/ ) your server will be shown as offline and there may be "Network error: failed to fetch" errors as well, but you can still click the Local and Remote access links and you can still download your flash backups. Unraid 6.10 users without the My Servers plugin will have no issues signing in or using the dashboard. We are working hard to restore full functionality.
    25 points
  4. The primary purpose of this release is to address an issue seen with many HP Microserver Gen8/9 servers (and other platforms) where data corruption could occur if Intel VT-d is enabled. ALL USERS are encouraged to update. As always, please make a flash backup before upgrading: Main/Flash/Flash Backup.

     While we have not identified the exact kernel commit that introduced this issue, we have identified a solution that involves changing the default IOMMU operational mode in the Linux kernel from "DMA Translation" to "Pass-through" (equivalent to the "intel_iommu=pt" kernel option). At first, we thought the 'tg3' network driver was the culprit; however, upon thorough investigation, we think this is coincidental, and we have also removed code that "blacklists" the tg3 driver. Special thanks to @JorgeB, who helped characterize and report this issue, as well as helping many people recover data when possible. Please refer to the Unraid OS 6.10.3-rc1 announcement post for more information.

     Version 6.10.3 2022-06-14

     Improvements:
     - Fixed a data corruption issue which could occur on some platforms, notably HP Microserver Gen8/9, when Intel VT-d was enabled. This was fixed by changing the Linux kernel default IOMMU operation mode from "DMA Translation" to "Pass-through".
     - Removed 'tg3' blacklisting when Intel VT-d is enabled. This was added in an abundance of caution, as all early reports of data corruption involved platforms which also (coincidentally) used the 'tg3' network driver. If you created a blank 'config/modprobe.d/tg3.conf' file you may remove it.
     - Plugin authors: a plugin file may include a tag which displays a markdown-formatted message when a new version is available. Use this to give instructions or warnings to users before the upgrade is done.
     - Brought back color-coding in logging windows.

     Bug fixes:
     - Fix issue detecting Mellanox NIC.
     - Misc. webGUI bug fixes.

     Change Log vs. Unraid OS 6.10.2

     Base distro: no changes

     Linux kernel:
     - version 5.15.46-Unraid
     - CONFIG_IOMMU_DEFAULT_PASSTHROUGH: Passthrough

     Management:
     - startup: improve network device detection
     - webgui: Added color coding in log files
     - webgui: In case of flash corruption try the test again
     - webgui: Improved syslog reading
     - webgui: Added log size setting when viewing syslog
     - webgui: Plugin manager: add ALERT message function
     - webgui: Add INFO icon to banner
     - webgui: Added translations to PageMap page
     - webgui: Fix: non-correcting parity check actually correcting if non-English language pack installed
     - webgui: Updated azure/gray themes
     - Better support for Firefox
     - Move utilization and notification indicators to the right
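     For anyone who wants to apply the equivalent kernel option manually on an earlier release, it goes on the append line in /boot/syslinux/syslinux.cfg. A sketch only - your existing append line may already carry other options, which should be kept:

     ```
     label Unraid OS
       menu default
       kernel /bzimage
       append intel_iommu=pt initrd=/bzroot
     ```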
    18 points
  5. Tons of posts blaming Windows 10 and SMB as the root cause of the inability to connect to unRaid were fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.

     Directions:
     1. Press the Windows key + R shortcut to open the Run command window.
     2. Type in gpedit.msc and press OK.
     3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click "Enable insecure guest logons" and set it to Enabled.
     4. Now attempt to access \\tower.

     Related Errors:
     - Windows cannot access \\tower
     - Windows cannot access \\192.168.1.102
     - You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
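     On Windows editions without gpedit.msc (e.g. Windows Home), the same policy is backed by a registry value. A sketch of an equivalent .reg file - assumption: your Windows edition honors this LanmanWorkstation value; a sign-out or reboot may be needed for it to take effect:

     ```
     Windows Registry Editor Version 5.00

     [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
     "AllowInsecureGuestAuth"=dword:00000001
     ```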
    16 points
  6. Hello everyone! I've been meaning to present my server here for quite a while - it felt long overdue, but somehow I never really found the time. The server consists of the following components:

     - Case: NZXT H2 Classic (front panel removed for better airflow)
     - Additional HDD cage: ICY Dock MB074SP-B (soon to be swapped for an MB074SP-1B with hot-swap)
     - CPU: Intel Core i5-10600
     - CPU cooler: Noctua NH-U14S
     - Motherboard: ASUS Z490-E GAMING
     - RAM: 4x Corsair Vengeance LPX 16GB DDR4 @2666MT/s C16
     - PSU: Corsair RM850x
     - Add-on cards: Mellanox ConnectX3 CX311A-XCAT 10Gbit/s SFP+ NIC; 2x DigitalDevices Cine C/T v6 dual tuner TV cards; Dell Perc H310 LSI 9240-8i in HBA mode; Coral Dual Edge TPU (unfortunately only one TPU usable, since it's attached via PCIe x1); Nvidia T400 2GB
     - Storage: 2x Samsung 970 Evo Plus 1TB ZFS mirror (appdata, Docker, libvirt, ...); 2x Crucial MX500 1TB as cache pool (Nextcloud data directory, unRAID cache, ...); 1x M.2 NVMe Transcend 128GB (passed through via VirtIO to a Debian VM for building the Docker containers); 6x WD Reds/White Labels for the array with one parity (Debian aptitude mirror, mirrors of various operating systems, private container registry, media, ...); 1x industrial Samsung SSD 128GB (passed through via VirtIO to a VM for building the plugin packages for unRAID); 1x WD Red on Unassigned Devices (Nextcloud external storage, backups, non-critical data, ...)
     - Boot stick(s): 1x Transcend JetFlash 600 Extreme-Speed 32GB USB 2.0 (unRAID); 1x SanDisk 16GB Cruzer Blade USB 2.0 (passed through to an unRAID VM)

     The server also hosts a Git repo, Jenkins and, as mentioned above, a Debian VM & an unRAID VM. All my Docker containers are built locally on the server, then uploaded to DockerHub and again to a private registry on the server (better safe than sorry).
     As mentioned above, there is also an unRAID VM on the server which is started whenever a new version of unRAID is detected; it is then automatically updated to the new version. After that, the build process for the various plugins starts, and after a successful build they are uploaded to the corresponding repository on GitHub. An additional routine also starts the unRAID VM whenever a new version of ZFS, CoreFreq or the Nvidia driver is found, compiles those packages for the current release version of unRAID and uploads them. Currently, when a new unRAID version is found, a build run compiles the following:

     - ZFS package @steini84
     - USB Serial package @SimonF
     - USB IP package @SimonF
     - NCT 6687 package
     - Nvidia driver package
     - DigitalDevices package
     - LibreELEC package
     - TBS-OS package
     - Coral TPU package
     - Firewire package
     - CoreFreq AMD package
     - CoreFreq Intel package
     - AMD Vendor Reset package
     - HPSAHBA package
     - Sound package (no release planned yet)

     A build run takes roughly 35 to 45 minutes, depending on how many Nvidia driver versions have to be built, since by now at least two - and in future three - have to be built: Production branch, New Feature branch, Beta branch (only if available), plus 470.82.00 (the last driver version that supports the 600 and 700 series). The build process is fully automated and starts no later than 15 minutes after a new unRAID version has been released. A note on power consumption: during a build run the system load averages about 180 watts for the 35 to 45 minutes; I've added a picture of the utilization at the very bottom...
     🙈 Just to explain: these packages have to be compiled/built for every unRAID version, because the modules they need depend on the kernel of the respective unRAID version. The plugins detect a kernel version change at boot, download the packages for that kernel version and install them right away during startup. This is one of the reasons I'm against virtualized firewalls on unRAID, or ad blockers that also cover unRAID: downloading the packages at unRAID startup is then impossible, because there is no internet connection yet, or (in the case of ad blockers) the DNS server isn't available yet. I'm currently considering upgrading the server to an i9-10850K to shorten the build run further, but since that CPU is hard to get at the moment and not exactly cheap, that will have to wait. Not really practical anyway - it would only save a few minutes. I hope you enjoyed the server presentation and this short look behind the scenes at how things run on my server. Here are a few more pictures: utilization during a build run, always between 90 and 100%:
    16 points
  7. LXC (Unraid 6.10.0+). LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low level, very flexible and covers just about every containment feature supported by the upstream kernel. This plugin doesn't include the LXD-provided CLI tool lxc! It basically allows you to run an isolated system with shared resources at CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly. Please keep in mind that you have to set up everything manually after deploying the container, e.g. SSH access or a dedicated user account other than root.

     ATTENTION: This plugin is currently in development and features will be added over time.

     LIMITATIONS: Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will currently not work, or not work properly. WORKAROUND: If you want to get an Ubuntu, Debian Bookworm+, Fedora 36, ... container to run, you have to add this line to the end of your container configuration file so that it is actually able to start: lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1

     Install LXC from the CA App. Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist, and it always needs to have a trailing / ) and click on "Update".

     ATTENTION:
     - It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running.
     - It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and to avoid data loss in the container(s)!
     - Never run New Permissions from the Unraid Tools menu on this directory, because you will basically destroy your container(s)!

     Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No. You will now see LXC appearing in Unraid; click on it to navigate to it, then click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address and whether Autostart should be enabled for the container; click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address is generated randomly every time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or the LXC service is started (can be changed later). In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done", and "Done" in the previous window, you will be greeted with this screen on the LXC page; to start the container, click on "Start". If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.
     After starting the container you will see several pieces of information (assigned CPUs, memory usage, IP address) about the container itself. By clicking on the container name you will get the storage location of this container's configuration file and the config file contents itself; for further information on the configuration file see here. Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing in lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect with the shell that you prefer. You will see that the terminal hostname has changed to the container's name; this means you are successfully attached to the container's shell and the container is ready to use. I recommend always updating the packages first; for Debian-based containers run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing but the basic tools are installed, so you have to install nano, vi, openssh-server, ... yourself. To install the SSH server (for Debian-based containers) see the second post.
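     The GUI steps above map onto the standard lxc-* command line tools. A hedged sketch of the lifecycle - the name "DebianLXC" and the debian/bullseye template values are example values only, and the commands are printed rather than executed so the sketch is safe to run anywhere:

     ```shell
     #!/bin/sh
     # Sketch of the LXC container lifecycle the plugin's buttons drive.
     NAME="DebianLXC"

     CREATE="lxc-create -n $NAME -t download -- -d debian -r bullseye -a amd64"
     START="lxc-start -n $NAME"
     ATTACH="lxc-attach -n $NAME -- /bin/bash"
     STOP="lxc-stop -n $NAME"

     # Print the commands instead of running them (no LXC needed here).
     printf '%s\n' "$CREATE" "$START" "$ATTACH" "$STOP"
     ```

     On a real Unraid box with the plugin's service running, these are the commands you would run in a terminal instead of clicking Create/Start/Stop.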
    14 points
  8. Finally the new Macinabox is ready. Sorry for the delay - work has been a F******G B*****D lately, taking all my time. It also has a new template, so please make sure your template is updated too (but it will work with the old template). A few new things have been added. It now has support for Monterey, Big Sur, Catalina, Mojave and High Sierra. You will see more options in the new template. Now, as well as being able to choose the vdisk size for install, you can also choose whether the VM is created with a raw or qcow2 (my favorite!) vdisk. The latest version of OpenCore (0.7.7) is in this release. I will try and update the container regularly with new versions as they come. However, you will notice a new option in the template where you can choose to install with the stock OpenCore (in the container) or use a custom one. You can add this in the custom_opencore folder in the Macinabox appdata folder. You can download versions to put here from https://github.com/thenickdude/KVM-Opencore/releases - choose the .gz version from there, place it in the above folder and set the template to custom, and it will use that (useful if I am slow in updating!! 🤣). Note: if set to custom but Macinabox can't find a custom OpenCore to unzip in this folder, it will use the stock one. There is also another option to delete and replace the existing OpenCore image that your VM is using. Set this to yes and run the container, and it will remove the OpenCore image from the macOS version selected in the template and replace it with a fresh one, stock or custom. By default the NICs for the macOS versions are virtio for Monterey and Big Sur, and the vDisk bus is virtio for these too. High Sierra, Mojave and Catalina use a sata vDisk bus, and they use e1000-82545em for their NICs. The correct NIC type for the "flavour" of OS you choose will automatically be added.
     However, if for any macOS you want to override the NIC type, you can change the default NIC type in the template to Virtio, Virtio-net, e1000-82545em or vmxnet3. By default the NIC for all VMs is on <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>. This will make the network adapter seem built in and should help with Apple services. Make sure to delete the Macinabox helper script before running the new Macinabox so the new script is put in, as there are some changes in that script too. I should be making some other changes in the next few weeks, but that's all for now.
    14 points
  9. I have created a file manager plugin, which I will release when the next Unraid 6.10 version comes out. This plugin extends the already present Browse function of Unraid with file management operations, such as copy, move, rename, delete and download. Operations can be performed on folders and/or files. Objects can be selected using a selection box at the left (in case multiple objects need to be copied or moved, for example) or by clicking on a selection popup to perform an operation on a single object. All operations need to be confirmed before proceeding; this should avoid accidental mistakes. The file manager gives direct access to all resources on the array and pools and should be handled with care. Below are two screenshots to give a first impression. Once released, more info will be given in the plugins section.
    14 points
  10. Has the plan for VM snapshots gone away?
    14 points
  11. Hey Unraiders, All of us at Lime Tech are very pleased to announce the hiring of @elibosley as a Staff Engineer. Eli has a diverse skill set and will be working on a variety of projects for us. Most notably, he'll be working on the My Servers team in the backend. Eli has been an avid Unraid user for years and he can’t wait to start building new features for the OS. In his free time he likes to drive his Veloster N in Autocross events, explore caves, and play video games like League of Legends. Here's Eli's full bio: Please help give Eli a warm Unraid welcome!
    12 points
  12. In this blog series, we want to put a spotlight on key community members to get to know them a little better and recognize them for all that they've done for the Unraid community over the years. This next blog features two outstanding community members who have helped out countless new Unraid users over the years: @mgutt and @ChatNoir https://unraid.net/blog/rockstars-mgutt-chatnoir
    12 points
  13. It would be very nice if Unraid supported snapshots for VMs. I would prefer this feature above all others.
    12 points
  14. Together with the release of Unraid 6.10-rc3 comes a new plugin: Dynamix File Manager (DFM). DFM is an extension to the built-in file browser functionality of the GUI and allows the user to do file management on the array. File operations include copy, move, delete, rename, upload and download, while it is also possible to change the owner and permissions of files and folders. The attached User Guide gives further explanation of how to use the Dynamix File Manager. DFM is a work in progress and over time additional features and functionality will be added. Please post your ideas about possible new features and enhancements in this topic. I hope this new plugin is a useful addition and makes it easier for users to manage their array content. You can install this plugin using the Apps function of CA. Dynamix File Manager.pdf
    11 points
  15. Application Name: Steam (Headless)
     Application Site: https://store.steampowered.com/
     Docker Hub: https://hub.docker.com/r/josh5/steam-headless/
     Github: https://github.com/Josh5/docker-steam-headless/
     Discord: https://unmanic.app/discord (Not just for Unmanic...)

     Description: Play your games in the browser with audio. Connect another device and use it with Steam Remote Play.

     Features:
     - NVIDIA GPU support
     - AMD GPU support
     - Full video/audio noVNC web access to a Desktop
     - Root access
     - SSH server for remote terminal access

     Notes:
     - ADDITIONAL SOFTWARE: If you wish to install additional applications, you can generate a script inside the "~/init.d" directory ending with ".sh". This will be executed on container startup.
     - STORAGE PATHS: Everything that you wish to save in this container should be stored in the home directory or a docker container mount that you have specified. All files that are stored outside your home directory are not persistent and will be wiped if there is an update of the container or you change something in the template.
     - GAMES LIBRARY: It is recommended that you mount your games library to `/games` and configure Steam to add that path.
     - AUTO START APPLICATIONS: In this container, Steam is configured to start automatically. If you wish to add additional services to start automatically, add them under Applications > Settings > Session and Startup in the WebUI.
     - NETWORK MODE: If you want to use the container as a Steam Remote Play (previously "In Home Streaming") host device, you should set the Network Type to "host". This is a requirement for controller hardware to work and to prevent traffic being routed through the internet because Steam thinks you are on a different network.

     Setup Guide:
     CONTAINER TEMPLATE: Navigate to the "APPS" tab, search for "steam-headless" and select either Install or Actions > Install from the search result. Configure the template as required.
     GPU CONFIGURATION (NVIDIA): This container can use your GPU.
     In order for it to do this you need to have the NVIDIA plugin installed:
     1. Install the Nvidia-Driver plugin by @ich777. This will maintain an up-to-date NVIDIA driver installation on your Unraid server.
     2. Toggle the steam-headless Docker container template editor to "Advanced View".
     3. In the "Extra Parameters" field, ensure that you have the "--runtime=nvidia" parameter added.
     4. (Optional - this step is only necessary if you have more than one GPU; with a single GPU, leaving this as "all" is fine.) Expand the "Show more settings..." section near the bottom of the template. In the Nvidia GPU UUID: (NVIDIA_VISIBLE_DEVICES) variable, copy your GPU UUID (can be found in the Unraid Nvidia plugin - see that forum thread for details).

     GPU CONFIGURATION (AMD): Install the Radeon-Top plugin by @ich777. Profit.

     ADDING CONTROLLER SUPPORT: Unraid's Linux kernel by default does not have the modules required to support controller input. Steam requires these modules to be able to create the virtual "Steam Input Gamepad Emulation" device that it can then map buttons to. @ich777 has kindly offered to build and maintain the required modules for the Unraid kernel, as he already has a CI/CD pipeline in place and a small number of other kernel modules that he is maintaining for other projects. So a big thanks to him for that! Install the uinput plugin from the Apps tab. The container will not be able to receive kernel events from the host unless the Network Type is set to "host"; ensure that your container is configured this way.

     WARNING: Be aware that this container requires at least ports 8083, 32123, and 2222 available for the WebUI, Web Audio, and SSH to work. It will also require any ports that Steam requires for Steam Remote Play.

     No server restart is required; however, ensure that the steam-headless Docker container is recreated after installing the uinput plugin for it to be able to detect the newly added module.
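     For reference, here is roughly what the Unraid template boils down to as a plain docker run command. This is a hedged sketch only: the host paths and the container-side home path are assumptions, so check the template and the Docker Hub page for the real values. The command is built as a string and printed, not executed:

     ```shell
     #!/bin/sh
     # Sketch: an approximate docker run for the steam-headless container.
     # Host paths and /home/default are illustrative assumptions.
     IMAGE="josh5/steam-headless"

     RUN_CMD="docker run -d --name=steam-headless \
     --network=host \
     --runtime=nvidia \
     -e NVIDIA_VISIBLE_DEVICES=all \
     -v /mnt/user/appdata/steam-headless/home:/home/default \
     -v /mnt/user/games:/games \
     $IMAGE"

     echo "$RUN_CMD"
     ```

     Note how host networking and the NVIDIA runtime flag from the steps above both appear here; the games-library mount matches the `/games` recommendation in the notes.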
    11 points
  16. Hello all, We wanted to do something special for all of our current users so, all through July we're running an Unraid Pro upgrade sale! Enjoy 30% off Unraid Pro upgrades: Basic license to Pro: $79 Now $55.30 USD Plus license to Pro: $49 Now $34.30 USD Take 30% off at checkout with the coupon code: PRO30 Full details and instructions ⤵️
    11 points
  17. After multiple recent support issues with SanDisk brand USBs, we don't recommend buying SanDisk USBs for Unraid at this point. Either due to counterfeit devices being sold or a manufacturing change directly from SanDisk, multiple users have attempted to boot SanDisk USBs and found out that they do not register a unique GUID and therefore cannot be properly licensed with Unraid. Multiple attempts at contacting SanDisk on this issue have gone nowhere. For a great rundown on the best USBs for Unraid, @SpaceInvaderOne made an exhaustively researched video on the topic: (Spoiler) The best 3 flash drives were:

     1. Samsung Bar Plus
        USA ---- https://amzn.to/32TtQyp
        UK ---- https://amzn.to/3004ooU
        DE --- https://www.amazon.de/Samsung-MUF-32BE4-EU-Flash-Speicherstick/dp/B07CVVHCTG/
     2. Kingston DataTraveler SE9 G2
        USA ---- https://amzn.to/30NhzIZ
        UK ---- https://amzn.to/3f4Bp7C
        DE --- https://www.amazon.de/Kingston-DataTraveler-USB-Stick-USB3-2-32GB/dp/B08KHTRF61?th=1
     3. Samsung Fit Plus
        USA --- https://amzn.to/3hFboha
        UK --- https://amzn.to/39vSsOR
        DE --- https://www.amazon.de/Samsung-Flash-Drive-MUF-32AB-APC/dp/B07HPWKS3C

     BONUS: @ich777's recommendation for Amazon.de users: https://www.amazon.de/Transcend-JetFlash-Extreme-Speed-32GB-USB-Stick/dp/B002WE6CN6
    11 points
  18. Woohoo! Thank you to the entire community for the bug reports, suggestions, and overall positive vibes along the way. The Unraid 6.10.0 Release Blog with all of the major highlights is here:
    11 points
  19. I am in the process of remaking Macinabox and adding some new features, and hope to have it finished by next weekend. I am sorry for the lack of updates recently on this container. Thank you @ghost82 for all you have done in answering questions here and on github, and sorry I haven't reached out to you before.
    11 points
  20. Hello everyone, I know this may not be the right place for it, but I would like to thank all the active members of this forum. You have all made my entry into the unRAID world much easier. I had been a silent reader for a few weeks and put my system together. Whether it was the hardware advice or the solutions to small/large problems: as a newcomer, I have so far been able to find an answer to every question here in the forum. Great! My system is running and I'm already starting to want to do more with it 🙈 Many, many thanks! 👏 Wishing you all only the best.
    11 points
  21. Compose Manager Beta Release! This plugin installs docker compose and compose switch. Use "docker compose" or "docker-compose" from the command line. See https://docs.docker.com/compose/cli-command/ for additional details. Install via Community Applications This plugin now adds a very basic control tab for compose on the docker page. The controls allow the creation of compose yaml files, as well as allowing you to issue compose up and compose down commands from the web ui. The web ui components are based heavily on the user.scripts plugin so a huge shoutout and thanks goes to @Squid. Future Work: Add scheduling for compose commands? Allow more options for configuring compose commands.
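For anyone new to compose itself, the files this plugin manages and the commands its buttons issue look roughly like this (the path, service name, and image below are made-up examples, not anything the plugin ships):

```shell
# Write a minimal compose file (hypothetical demo service).
mkdir -p /tmp/compose-demo
cat > /tmp/compose-demo/docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami   # tiny demo web service
    ports:
      - "8080:80"
EOF

# The plugin's up/down buttons boil down to commands like:
#   docker compose -f /tmp/compose-demo/docker-compose.yml up -d
#   docker compose -f /tmp/compose-demo/docker-compose.yml down
```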
    10 points
  22. This thread is meant to replace the now outdated old one about recommended controllers, these are some controllers known to be generally reliable with Unraid: 2 ports: Asmedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1) 4 ports: Asmedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1) 5 ports: JMicron JMB585 (PCIe 3.0 x4 - x2 electrically) These JMB controllers are available in various different SATA/M.2 configurations, just some examples: 6 ports: Asmedia ASM1166 (PCIe 3.0 x4 physical, x2 electrical) These exist with both x4 (x2 electrical) and x1 PCIe interface, for some use cases the PCIe x1 may be a good option, i.e., if you don't have larger slots available, though bandwidth will be limited: 8 ports: any LSI with a SAS2008/2308/3008/3408/3808 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, 9500-8i, etc and clones, like the Dell H200/H310 and IBM M1015, these latter ones need to be crossflashed (most of these require a x8 or x16 slot, older models like the 9201-8i and 9211-8i are PCIe 2.0, newer models like the 9207-8i, 9300-8i and newer are PCIe 3.0) For these and when not using a backplane you need SAS to SATA breakout cables, SFF-8087 to SATA for SAS2 models: SFF-8643 to SATA for SAS3 models: Keep in mind that they need to be forward breakout cables (reverse breakout look the same but won't work, as the name implies they work for the reverse, SATA goes on the board/HBA and the miniSAS on a backplane), sometimes they are also called Mini SAS (SFF-8xxx Host) to 4X SATA (Target), this is the same as forward breakout. If more ports are needed you can use multiple controllers, controllers with more ports (there are 16 and 24 port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc) or use one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander. P.S. 
Avoid SATA port multipliers with Unraid, also avoid any Marvell controller. For some performance numbers on most of these see below:
    10 points
  23. OK. Not that I use this plugin, but I have forked it so that the logs and the context menus appear. I have temporarily removed the dashboard part of things until I get more time to look at its display aberrations. Going forward, so long as explicit directions are made as to how to replicate issues on 6.10, I will attempt to fix them, but no guarantees will be made; my goal here is not to continue development on this plugin, but simply to keep it in the same rough state as it was left in when the author stopped development. You will need to uninstall the version from Guild Darts and then reinstall the forked version. I would suggest making a backup of the folder.json file within /config/plugins/docker.folder on the flash drive. If you don't do this, then you will need to recreate your folders.
    10 points
  24. Hi folks, after spending a fair bit of time hardening my SMB configuration I figured I'd write a quick guide on what I consider the best settings for the security of an SMB server running on Unraid 6.9.2. First, before we get into SMB settings, you may also want to consider hardening the data while it is at rest by specifying an encrypted file-system type for your array (although this isn't a share specific option). For SMB, first set the SMB settings available: I've settled on the following block, which is what I consider to be a hardened SMB configuration for a standalone server that is not domain joined or using Kerberos authentication:

server min protocol = SMB3_11
client ipc min protocol = SMB3_11
client signing = mandatory
server signing = mandatory
client ipc signing = mandatory
client NTLMv2 auth = yes
smb encrypt = required
restrict anonymous = 2
null passwords = No
raw NTLMv2 auth = no

This configuration block is to be entered into the SMB extras configuration section of the SMB settings page. These settings will break compatibility with legacy clients, but when I say legacy I'm talking like Windows Server 2003/XP. Windows 10+ clients should work without issue as they all support (but are not necessarily configured to REQUIRE) these security features. These settings force the following security options: All communications must occur via SMB v3.1.1. All communications force the use of signing. NTLMv2 authentication is required; LanMan authentication is implicitly disabled. All communications must be encrypted. Anonymous access is disabled. Null session access is disabled. NTLMSSP is required for all NTLMv2 authentication attempts. In addition, the following security settings are configured for each available share: Also ensure that you create a non-root user to access the shares with and that all accounts use strong passwords (ideally 12+ complex characters). 
Finally, a couple of things to note: If you read the release notes for Unraid 6.9.2, you'll see that Unraid uses samba: version 4.12.14. This is extremely important. If you, like me, google SMB configuration settings you'll eventually come across the documentation for the current version of Samba. But Unraid is not running the latest version. The correct documentation to follow is for the 4.12 branch of Samba, and the configuration options are different enough that a valid config for 4.15 will not work for 4.12. With "null passwords = No" you must enable Secure or Private security modes on each exported Unraid share - guest access won't work. There is currently no way to add per-share custom smb.conf settings, so either the server gets hardened or it does not. Do not apply a [share_name] tag as it will not work. It is not possible to specify `client smb3 encryption algorithms` in version 4.12.x of Samba. Kerberos authentication and domain authentication may be preferable in other circumstances; in those instances, additional hardening options may be considered. If you, like me, use VLC media player on mobile devices, you may find that SMBv3 with encryption makes the host inaccessible on iOS devices. The VLC team is aware of this and there is a fix available if you have the bleeding edge/development version of the app, but not if you download the current store version (last I checked, the fix hadn't been released). It should work fine with Android/Windows VLC. If you have any suggestions for other options that I have not included here, or that you think are a mistake, please let me know and I'd be most happy to look into them and adjust. Some other quick suggestions for hardening Unraid in general: disable whatever services you don't need. In my case, that means I: Disable NFS Disable FTP Disable 'Start APC UPS daemon' If you enable Syslog, also enable NTP and configure it. 
Disable Docker Quick note on Docker: having the service enabled allows for 'ip forwarding', which could, in theory, be used to route traffic via the host to bypass firewall rules (depending on your network topology, obviously). Hope that helps someone else out there. Cheers!
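If you want to verify the protocol floor from a Linux client, `smbclient` can cap the negotiated protocol with its `-m`/`--max-protocol` option; a sketch (hostname and user are placeholders):

```shell
# Should be refused by the hardened server (SMB1 only):
smbclient -L //TOWER -U someuser -m NT1
# Should still succeed (SMB3):
smbclient -L //TOWER -U someuser -m SMB3
```

If the first command still connects, the "server min protocol" line isn't being applied.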
    10 points
  25. Today I released, in collaboration with @steini84, an update to the ZFS plugin (v2.0.0) to modernize the plugin and switch from unRAID version detection to Kernel version detection, along with a general overhaul of the plugin. When you update the plugin from v1.2.2 to v2.0.0 the plugin will delete the "old" package for ZFS and pull down the new ZFS package (about 45MB). Please wait until the download is finished and the "DONE" button is displayed; please don't click the red "X" button! After it finishes you can use your server and ZFS as usual and you don't need to take any further steps like rebooting or anything else. The new version of the plugin also includes the Plugin Update Helper, which will download packages for plugins before you reboot when you are upgrading your unRAID version and will notify you when it's safe to reboot: The new version of the plugin will also check on each reboot if there is a newer version of ZFS available, download it and install it (the update check is activated by default). If you want to disable this feature simply run this command from a unRAID terminal: sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg" If you have disabled this feature already and you want to enable it run this command from a unRAID terminal: sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg" Please note that this feature needs an active internet connection on boot. If you run, for example, AdGuard/PiHole/pfSense/... on unRAID it is very likely that you have no active internet connection on boot, so the update check will fail and the plugin will fall back to installing the currently available local package for ZFS. It is now also possible to install unstable packages from ZFS if unstable packages are available (this is turned off by default). 
If you want to enable this feature simply run this command from a unRAID terminal: sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg" If you have enabled this feature already and you want to disable it run this command from a unRAID terminal: sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg" Please note that this feature also needs an active internet connection on boot, like the update check (if no unstable package is found, the plugin will automatically return this setting to false so that it doesn't pull unstable packages - unstable packages are generally not recommended). Please also keep in mind that for every new unRAID version ZFS has to be compiled. I would recommend waiting at least two hours after a new version of unRAID is released before upgrading unRAID (Tools -> Update OS -> Update) because of the involved compiling/upload process. Currently the process is fully automated for all plugins that need packages for each individual Kernel version. The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or some error occurred during compilation. If you get an error from the Plugin Update Helper I would recommend creating a post here and not rebooting yet.
    10 points
  26. Per @Squid, @luxinliang was lucky visitor # 1 MILLION to the Community Apps Plug-in thread! Wow!
    10 points
  27. Hi All, Just want to share my findings about unRAID notification. My notification settings are based on Gmail. This how-to will enable you to send email notifications from Gmail to a Yahoo email address. If you like my how-to, then make it a sticky. Thank you.🙂 ======================================================================== Requirements: A) Setup a gmail account. This account will be the SENDER's email address << Assumption: you have set up 2-step authentication via your mobile phone for logging into your gmail account >> B) Setup a second gmail or any other free webmail account. eg: xyz@yahoo.com This account will be the RECEIVER's email address ======================================================================== You need to set up a Google App Password. 1) Login into: accounts.google.com 2) Go to "Security" on your left section. 3) Under the heading: "Signing in to Google" 3.1) Click on App passwords 3.2) Sign in to your normal gmail account 3.3) Click: Select app, then select: Mail 3.4) Click: Select device, then select: Custom 3.5) Give a name for the unRAID server e.g: midtowerunraid 3.6) Press Generate button 3.7) A window will pop out and the app password for the device is displayed in the yellow box. Copy the password, keep it in a safe place and save it in notepad. This password is 16 characters long. Next click the button: Done e.g: sskwowcomemtyufg <----- 16 character long app password. 3.8) Finally sign out of all accounts Follow the steps below to complete the SMTP settings within the unRAID server
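For reference, the usual Gmail values to enter in an SMTP-based notification setup are roughly these (a hedged sketch; field names are paraphrased, not exact Unraid labels):

```
SMTP server:   smtp.gmail.com
SMTP port:     587 (with TLS/STARTTLS)
Username:      your.sender@gmail.com
Password:      the 16-character app password generated above
               (not your normal Gmail password)
```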
    9 points
  28. In this blog series, we want to put a spotlight on key community members to get to know them a little better and recognize them for all that they've done for the Unraid community over the years. This next blog features two outstanding community members who have helped out countless new Unraid users over the years: @JorgeB and @ich777 https://unraid.net/blog/rockstars-jorgeb-ich777 If either have helped you out over the years here, please consider buying them a beer or lunch! JorgeB ich777
    9 points
  29. Hey Unraid Community! For the first time ever, we're running a Cyber Monday Sale: 20% off Unraid Pro and Pro Upgrades! If you're planning a new build soon or want to purchase a key for a friend or family member, do it this Monday, 11/29/21- 24 hours only from 12:01-11:59 PST! No server installation required for purchase. For full details, head over to unraid.net/cybermonday
    9 points
  30. I had the opportunity to test the “real world” bandwidth of some commonly used controllers in the community, so I’m posting my results in the hopes that it may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal read/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected; the result is the observed parity/read check speed using a fast SSD only array with Unraid V6. Values in green are the measured controller power consumption with all ports in use. 2 Port Controllers SIL 3132 PCIe gen1 x1 (250MB/s) 1 x 125MB/s 2 x 80MB/s Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards 1 x 375MB/s 2 x 206MB/s JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards 1 x 570MB/s 2 x 450MB/s 4 Port Controllers SIL 3114 PCI (133MB/s) 1 x 105MB/s 2 x 63.5MB/s 3 x 42.5MB/s 4 x 32MB/s Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s) 4 x 210MB/s Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization) 2 x 200MB/s 3 x 140MB/s 4 x 100MB/s Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization) 2 x 375MB/s 3 x 255MB/s 4 x 204MB/s IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset, results should be the same as for an LSI 9211-4i and other similar controllers 2 x 570MB/s 3 x 500MB/s 4 x 375MB/s Asmedia ASM1064 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40156 and other similar cards 2 x 450MB/s 3 x 300MB/s 4 x 225MB/s Asmedia ASM1164 PCIe gen3 x2 (1970MB/s) - NOTE - not actually tested, performance inferred from the ASM1166 with up to 4 devices 2 x 565MB/s 3 x 565MB/s 4 x 445MB/s 5 and 6 Port Controllers JMicron JMB585 PCIe gen3 x2 
(1970MB/s) - 2w - e.g., SYBA SI-PEX40139 and other similar cards 2 x 570MB/s 3 x 565MB/s 4 x 440MB/s 5 x 350MB/s Asmedia ASM1166 PCIe gen3 x2 (1970MB/s) - 2w 2 x 565MB/s 3 x 565MB/s 4 x 445MB/s 5 x 355MB/s 6 x 300MB/s 8 Port Controllers Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s) 4 x 220MB/s (167MB/s*) 5 x 177.5MB/s (135MB/s*) 6 x 147.5MB/s (115MB/s*) 7 x 127MB/s (97MB/s*) 8 x 112MB/s (84MB/s*) * PCI-X 100Mhz slot (800MB/S) Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w 4 x 140MB/s 5 x 117MB/s 6 x 105MB/s 7 x 90MB/s 8 x 80MB/s Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w 4 x 340MB/s 6 x 345MB/s 8 x 320MB/s (205MB/s*, 200MB/s**) * PCIe gen2 x4 (2000MB/s) ** PCIe gen1 x8 (2000MB/s) LSI 9211-8i PCIe gen2 x8 (4000MB/s) - 6w – LSI 2008 chipset 4 x 565MB/s 6 x 465MB/s 8 x 330MB/s (190MB/s*, 185MB/s**) * PCIe gen2 x4 (2000MB/s) ** PCIe gen1 x8 (2000MB/s) LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset 8 x 565MB/s LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset 8 x 565MB/s (425MB/s*, 380MB/s**) * PCIe gen3 x4 (3940MB/s) ** PCIe gen2 x8 (4000MB/s) SAS Expanders HP 6Gb (3Gb SATA) SAS Expander - 11w Single Link with LSI 9211-8i (1200MB/s*) 8 x 137.5MB/s 12 x 92.5MB/s 16 x 70MB/s 20 x 55MB/s 24 x 47.5MB/s Dual Link with LSI 9211-8i (2400MB/s*) 12 x 182.5MB/s 16 x 140MB/s 20 x 110MB/s 24 x 95MB/s * Half 6GB bandwidth because it only links @ 3Gb with SATA disks Intel® SAS2 Expander RES2SV240 - 10w Single Link with LSI 9211-8i (2400MB/s) 8 x 275MB/s 12 x 185MB/s 16 x 140MB/s (112MB/s*) 20 x 110MB/s (92MB/s*) * Avoid using slower linking speed disks with expanders, as it will bring total speed down, in this example 4 of the SSDs were SATA2, instead of all SATA3. 
Dual Link with LSI 9211-8i (4000MB/s) 12 x 235MB/s 16 x 185MB/s Dual Link with LSI 9207-8i (4800MB/s) 16 x 275MB/s LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane) Single Link with LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds) 8 x 500MB/s 12 x 340MB/s Dual Link with LSI 9300-8i (*) 10 x 510MB/s 12 x 460MB/s * tested with SATA3 devices, max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can closer to SAS3 speeds, with SAS3 devices limit here would be the PCIe link, which should be around 6600-7000MB/s usable. HP 12G SAS3 EXPANDER (761879-001) Single Link with LSI 9300-8i (2400MB/s*) 8 x 270MB/s 12 x 180MB/s 16 x 135MB/s 20 x 110MB/s 24 x 90MB/s Dual Link with LSI 9300-8i (4800MB/s*) 10 x 420MB/s 12 x 360MB/s 16 x 270MB/s 20 x 220MB/s 24 x 180MB/s * tested with SATA3 devices, no Databolt or equivalent technology, at least not with an LSI HBA, with SAS3 devices limit here would be the around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable. Intel® SAS3 Expander RES3TV360 Single Link with LSI 9308-8i (*) 8 x 490MB/s 12 x 330MB/s 16 x 245MB/s 20 x 170MB/s 24 x 130MB/s 28 x 105MB/s Dual Link with LSI 9308-8i (*) 12 x 505MB/s 16 x 380MB/s 20 x 300MB/s 24 x 230MB/s 28 x 195MB/s * tested with SATA3 devices, PMC expander chip includes similar functionality to LSI's Databolt, with SAS3 devices limit here would be the around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable. Note: these results were after updating the expander firmware to latest available at this time (B057), it was noticeably slower with the older firmware that came with it. 
Sata 2 vs Sata 3 I see many times on the forum users asking if changing to Sata 3 controllers or disks would improve their speed. Sata 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market; if buying a new board or controller you should buy Sata 3 for the future, but except for SSD use there’s no gain in changing your Sata 2 setup to Sata 3. Single vs. Dual Channel RAM In arrays with many disks, and especially with low “horsepower” CPUs, memory bandwidth can also have a big effect on parity check speed, obviously this will only make a difference if you’re not hitting a controller bottleneck, two examples with 24 drive arrays: Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7Ghz Single Channel – 99.1MB/s Dual Channel - 132.9MB/s Supermicro X9SCL-F with Intel G1620 dual core @ 2.7Ghz Single Channel – 131.8MB/s Dual Channel – 184.0MB/s DMI There is another bus that can be a bottleneck for Intel based boards, much more so than Sata 2, the DMI that connects the south bridge or PCH to the CPU. Socket 775, 1156 and 1366 use DMI 1.0, socket 1155, 1150 and 2011 use DMI 2.0, socket 1151 uses DMI 3.0 DMI 1.0 (1000MB/s) 4 x 180MB/s 5 x 140MB/s 6 x 120MB/s 8 x 100MB/s 10 x 85MB/s DMI 2.0 (2000MB/s) 4 x 270MB/s (Sata2 limit) 6 x 240MB/s 8 x 195MB/s 9 x 170MB/s 10 x 145MB/s 12 x 115MB/s 14 x 110MB/s DMI 3.0 (3940MB/s) 6 x 330MB/s (Onboard SATA only*) 10 X 297.5MB/s 12 x 250MB/s 16 X 185MB/s *Despite being DMI 3.0** , Skylake, Kaby Lake, Coffee Lake, Comet Lake and Alder Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports. **Except low end H110 and H310 chipsets which are only DMI 2.0; Z690 is DMI 4.0 and not yet tested by me, but expect the same result as the other Alder Lake chipsets. 
DMI 1.0 can be a bottleneck using only the onboard Sata ports, DMI 2.0 can limit users with all onboard ports used plus an additional controller onboard or on a PCIe slot that shares the DMI bus, in most home market boards only the graphics slot connects directly to CPU, all other slots go through the DMI (more top of the line boards, usually with SLI support, have at least 2 slots), server boards usually have 2 or 3 slots connected directly to the CPU, you should always use these slots first. You can see below the diagram for my X9SCL-F test server board, for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4. UMI (2000MB/s) - Used on most AMD APUs, equivalent to intel DMI 2.0 6 x 203MB/s 7 x 173MB/s 8 x 152MB/s Ryzen link - PCIe 3.0 x4 (3940MB/s) 6 x 467MB/s (Onboard SATA only) I think there are no big surprises and most results make sense and are in line with what I expected, exception maybe for the SASLP that should have the same bandwidth of the Adaptec 1430SA and is clearly slower, can limit a parity check with only 4 disks. I expect some variations in the results from other users due to different hardware and/or tunnable settings, but would be surprised if there are big differences, reply here if you can get a significant better speed with a specific controller. How to check and improve your parity check speed System Stats from Dynamix V6 Plugins is usually an easy way to find out if a parity check is bus limited, after the check finishes look at the storage graph, on an unlimited system it should start at a higher speed and gradually slow down as it goes to the disks slower inner tracks, on a limited system the graph will be flat at the beginning or totally flat for a worst-case scenario. See screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but principle is the same). 
If you are not bus limited but still find your speed low, there are a couple of things worth trying: Diskspeed - your parity check speed can’t be faster than your slowest disk, a big advantage of Unraid is the possibility to mix different size disks, but this can lead to having an assortment of disk models and sizes; use this to find your slowest disks and when it’s time to upgrade replace these first. Tunables Tester - on some systems can increase the average speed 10 to 20MB/s or more, on others makes little or no difference. That’s all I can think of, all suggestions welcome.
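The bus-limited numbers above follow a simple ceiling: usable bus bandwidth divided by the number of disks, with measured results landing somewhat lower due to protocol overhead. A small sketch of that arithmetic:

```shell
# Rough per-disk ceiling (MB/s) when a parity check is bus limited.
per_disk_ceiling() {
  # $1 = usable bus bandwidth in MB/s, $2 = number of disks
  echo $(( $1 / $2 ))
}

per_disk_ceiling 1970 4   # PCIe 3.0 x2 class (JMB585/ASM1166): 492 ceiling vs ~440-445 measured
per_disk_ceiling 500 2    # PCIe 2.0 x1 class (ASM1061): 250 ceiling vs ~206 measured
```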
    9 points
  31. No crypto currency? That one will be a global payment system.
    9 points
  32. Which Docker networks can reach which others? I went to the trouble of starting what felt like 100 containers with different networks and then tried to reach each of the other containers, Unraid, the router, or the internet using "curl <IP>:<Port>". The following table shows the result, which however only applies if "Host to Custom Access" has been disabled in the Docker settings. Maybe I'll also find the time to try out the other variants. Source = the container the request comes from Target = the container / the destination the request went to Keep in mind that you can connect containers to additional networks with the command "docker network connect <Netzwerkname> <ContainerName>". This way some of the restrictions can be lifted again.
    9 points
  33. To utilize your Nvidia graphics card in your Docker container(s) the basic steps are: Add '--runtime=nvidia' in your Docker template in 'Extra Parameters' (you have to enable 'Advanced view' in the template to see this option) Add a variable to your Docker template with the Key: 'NVIDIA_VISIBLE_DEVICES' and as Value: 'YOURGPUUUID' (like 'GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd') Add a variable to your Docker template with the Key: 'NVIDIA_DRIVER_CAPABILITIES' and as Value: 'all' Make sure to enable hardware transcoding in the application/container itself. See the detailed instructions below for Emby, Jellyfin & Plex (alphabetical order). UUID: You can get the UUID of your graphics card in the Nvidia-Driver Plugin itself PLUGINS -> Nvidia-Driver (please make sure there is no leading space!) : NOTE: You can use one card for more than one container at the same time - depending on the capabilities of your card. Emby: Note: To enable Hardware Encoding you need a valid Premium Subscription, otherwise Hardware Encoding will not work! 
Add '--runtime=nvidia' to the 'Extra Parameters': Add a variable to your Docker template with the Key: 'NVIDIA_VISIBLE_DEVICES' and as Value: 'YOURGPUUUID': Add a variable to your Docker template with the Key: 'NVIDIA_DRIVER_CAPABILITIES' and as Value: 'all': Make sure to enable hardware transcoding in the application/container itself After starting the container and playing some movie that needs to be transcoded that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (the text NVENC/DEC indicates exactly that): Jellyfin: Add '--runtime=nvidia' to the 'Extra Parameters': Add a variable to your Docker template with the Key: 'NVIDIA_VISIBLE_DEVICES' and as Value: 'YOURGPUUUID': Add a variable to your Docker template with the Key: 'NVIDIA_DRIVER_CAPABILITIES' and as Value: 'all': Make sure to enable hardware transcoding in the application/container itself After starting the container and playing some movie that needs to be transcoded that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (Jellyfin doesn't display whether it's actually transcoding with the graphics card at the time of writing, but you can also open up a Unraid terminal and type in 'watch nvidia-smi'; you will then see at the bottom that Jellyfin is using your card): PLEX: (thanks to @cybrnook & @satchafunkilus who granted permission to use their screenshots) Note: To enable Hardware Encoding you need a valid Plex Pass, otherwise Hardware Encoding will not work! 
Add '--runtime=nvidia' to the 'Extra Parameters': Add a variable to your Docker template with the Key: 'NVIDIA_VISIBLE_DEVICES' and as Value: 'YOURGPUUUID': Add a variable to your Docker template with the Key: 'NVIDIA_DRIVER_CAPABILITIES' and as Value: 'all': Make sure to enable hardware transcoding in the application/container itself: After starting the container and playing some movie that needs to be transcoded that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (the text '(hw)' at Video indicates exactly that):
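For reference, the three template changes above translate to plain `docker run` flags like these (container name and trailing options are placeholders; the UUID is the example one from the top of the post):

```shell
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES='GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' \
  -e NVIDIA_DRIVER_CAPABILITIES='all' \
  ...
```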
    9 points
  34. With this guide, Plex uses your RAM while transcoding, which prevents wearing out your SSD. Edit the Plex Container and enable the "Advanced View": Add this to "Extra Parameters" and hit "Apply": --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000 Result: Side note: If you dislike permanent writes to your SSD add " --no-healthcheck ", too. Now open Plex -> Settings -> Transcoder and change the path to "/tmp": If you'd like to verify it's working, you can open the Plex container's Console: Now enter this command while a transcoding is running: df -h Transcoding to RAM-Disk works if "Use%" of /tmp is not "0%": Filesystem Size Used Avail Use% Mounted on tmpfs 3.8G 193M 3.7G 5% /tmp After some time it fills up to nearly 100%: tmpfs 3.8G 3.7G 164M 97% /tmp And then Plex purges the folder automatically: tmpfs 3.8G 1.3G 3.5G 33% /tmp If you stop the movie Plex will delete everything: tmpfs 3.8G 3.8G 0 0% /tmp By this method Plex never uses more than 4GB RAM, which is important, as fully utilizing your RAM can cause unexpected server behaviour.
    9 points
  35. Posting this here in the hope that it assists someone in the future. I host my instance of HomeAssistant in a VM on unRAID. I have recently purchased a ConBee II USB-Gateway so I can add Zigbee devices. I added the USB using the unRAID VM GUI, like I imagine most would, by just checking the tick box next to the device. This didn't work. While Home Assistant found the device, the integration would not add (there were communication errors). The trick was to add the device as a serial-usb device. AFAIK you cannot do this via the GUI. So I added the following code to my VM config:

<serial type='dev'>
  <source path='/dev/serial/by-id/<yourusbid>'/>
  <target type='usb-serial' port='1'>
    <model name='usb-serial'/>
  </target>
  <alias name='serial1'/>
  <address type='usb' bus='0' port='4'/>
</serial>

I was then able to add the integration easily. Interestingly, it didn't auto discover, but that's just an aside. Note, <yourusbid> can be found via the command line - it contains the device serial, so it's not to be posted.
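To find the stable path to put in the source element, listing the by-id directory on the Unraid host is the usual way (a sketch; no output shown, since the entry names embed the device serial):

```shell
ls -l /dev/serial/by-id/
```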
    8 points
  36. Super happy to be here! Thanks for the shout out!
    8 points
  37. Awesome work by the team and just reiterating what @SpencerJ said above: THANK YOU to everyone who has helped test and report bugs for us! You guys are all rockstars!!
    8 points
  38. The newest releases, 4.0 and 4.1, added influxdb among other changes, and the container template on the community applications page needs to be updated accordingly. You can change the repository field to "lscr.io/linuxserver/scrutiny:8e34ef8d-ls35" until it gets updated.
    8 points
  39. One of the main reasons why the Unraid community is so great is due to our many Community Rockstars who go above and beyond to help others out. 🤘 In this blog series, we want to put a spotlight on key community members to get to know them a little better and recognize them for all that they've done for the Unraid community over the years. For this inaugural blog, we begin with two outstanding forum Moderators, @trurl and @JonathanM, who have helped out countless new Unraid users over the years: https://unraid.net/blog/rockstars-trurl-jonathanm If either have helped you out over the years here, please consider buying them a beer or lunch! trurl JonathanM
    8 points
  40. I just wanted to add a follow-up to this in case anyone is in the same scenario. I was confused by your comment, specifically when you said "the plugin still operates as is with its default settings", because the only thing that was working for me was the separation into folders; none of the context menu items were working for me on any of the icons. It turns out the actual source of the problem is the "Preview advanced context menu" option for the docker folders; that is where the main bug lives. After turning this off (I had it on for all folders because it's pretty useful), the typical context menu features started working again. This brings back the major useful functionality of the plugin and I can again navigate my dockers without pulling my hair out. Thanks.
    8 points
  41. Due to various continual issues with this plugin, it has now been marked as being incompatible with Unraid versions 6.9.0+ It is highly advised to uninstall this plugin (and Statistics Sender if installed) and switch to the Unassigned Devices Preclear plugin (or Binhex-Preclear if you prefer a command line interface)
    8 points
  42. Should be ready, I think, for Wednesday this week
    8 points
  43. Here is a short guide to a user script that creates a backup image of a Pi and shrinks it. Since I'm still fairly new to Unraid and completely inexperienced with Linux, terminals, commands, etc., please excuse any mistakes; corrections and useful additions are welcome. As a basis I took the script by Lukas Knöller - hobbyblogging.de and extended it with a few variables and PiShrink. The script connects to the Pi via SSH, creates an image of it and stores it in the backup share. Afterwards, backups beyond the retention count are deleted and the freshly created backup is shrunk. 
-Set up a backup share as the target. If several Pis are to be backed up, I recommend creating a separate subfolder in the share for each Pi. 
-Create the subfolder in the share: run the following command in the Unraid terminal (DEIN-BACKUP-SHARE/PI-UNTERORDNER stands for your backup share and Pi subfolder) 
mkdir -p /mnt/user/DEIN-BACKUP-SHARE/PI-UNTERORDNER 
-Download sshpass, create an /extra/ folder on the flash drive, move sshpass into /extra/ and install it: run the following code in the Unraid terminal. 
wget https://packages.slackonly.com/pub/packages/14.2-x86_64/network/sshpass/sshpass-1.06-x86_64-1_slonly.txz && mkdir /boot/extra && mv sshpass-1.06-x86_64-1_slonly.txz /boot/extra/ && installpkg /boot/extra/sshpass-1.06-x86_64-1_slonly.txz 
Alternative, in case the source is unreachable: 
Because sshpass sits in the /extra/ folder of the Unraid stick, it is reinstalled on every Unraid boot. If you use the "Fix Common Problems" plugin, you will now get a warning, which can be ignored with the button on the right: 
-Download PiShrink, move it to /mnt/user/appdata/, make it executable: run the following code in the Unraid terminal. 
wget -O /mnt/user/appdata/pishrink.sh https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh && chmod +x /mnt/user/appdata/pishrink.sh 
-Install the User Scripts plugin: 
-Open the User Scripts plugin and create a new user script: 
-Copy the script into the empty field and adjust the variables: 
#!/bin/bash 
#Variables 
PI_IP="XXX.XXX.XXX.XXX" 
SSH_USER="PI-USER" 
SSH_PW="DEIN-SUPER-PASSWORT-VOM-PI-USER" 
BACKUP_PFAD="/mnt/user/DEIN-BACKUP-SHARE/PI-UNTERORDNER" #no trailing / 
BACKUP_ANZAHL="5" 
BACKUP_NAME="pi_image" 
SHRINK_SCRIPT_PFAD="/mnt/user/appdata/pishrink.sh" 
DATUM="$(date +%Y%m%d)" 
#Create the backup 
sshpass -p ${SSH_PW} ssh ${SSH_USER}@${PI_IP} sudo "dd if=/dev/mmcblk0" | dd of=${BACKUP_PFAD}/${BACKUP_NAME}-${DATUM}.img bs=1MB 
#Delete old backups (xargs -r: do nothing when there is nothing to delete) 
pushd ${BACKUP_PFAD}; ls -tr ${BACKUP_PFAD}/${BACKUP_NAME}* | head -n -${BACKUP_ANZAHL} | xargs -r rm; popd 
sync -f ${BACKUP_PFAD} 
#Shrink 
${SHRINK_SCRIPT_PFAD} ${BACKUP_PFAD}/${BACKUP_NAME}-${DATUM}.img 
With Raspberry Pi OS the user should be "Pi". You set the password yourself when installing Raspberry Pi OS. The BACKUP_PFAD variable must not end with a "/", since that is already included in the code. If you use "root", or the script aborts because of sporadic repeated password prompts, @dan4UR and @Anym001 may have the solution for you: root user Repeated password prompt with sudo 
-Save the script with the "SAVE CHANGES" button. 
-Adjust the cron schedule: in my example the script runs at minute 15 of hour 23 on the 11th and 26th of every month, regardless of the weekday. In short: every 11th and 26th at 23:15. Help with cron: https://crontab.guru/ 
-Save the settings at the bottom with the "APPLY" button. 
As mentioned at the start, I'm not very experienced and ask for your patience with mistakes. Thanks also to @ich777, @alturismo and @Anym001 for ideas, advice, testing and knowledge. Otherwise: happy backing up! 
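The example schedule above ("every 11th and 26th at 23:15") corresponds to this cron expression in the User Scripts schedule field:

```
# minute hour day-of-month month day-of-week
15 23 11,26 * *
```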
*CHANGELOG* 25.08.2021 20:20 - Added the DATUM variable 30.11.2021 09:35 - Added troubleshooting notes for root and sudo 26.12.2021 14:22 - Alternative source for sshpass. Thanks @mgutt
    8 points
  44. Currently working on a TPM emulator plugin for unRAID.
    8 points
  45. Or how often people don't read the FAQs before posting.
    8 points
  46. I did this, everything worked again. Then I updated to linuxserver/mariadb:latest and everything continued to work, but with an Alpine base. Saved me from migrating, I guess I'll sit on my backups and see how this plays out.
    8 points
  47. People with specific Seagate Ironwolf disks on LSI controllers have been having issues with Unraid 6.9.0 and 6.9.1. Typically when spinning up the drive could drop off the system. Getting it back on would require checking, unassigning, reassigning and rebuilding its contents (about 24 hours). It happened to me three times in a week across two of my four affected drives. The drive in question is the 8TB Ironwolf ST8000VN004, although 10TB has been mentioned, so it may affect several. There have been various comments and suggestions over the threads, and it appears that there is a workaround solution. The workaround is reversible, so if an official fix comes along you can revert your settings back. This thread is here to consolidate the great advice given by @TDD, @SimonF, @JorgeB and others to hopefully make it easier for people to follow. This thread is also here to hopefully provide a central place for those with the same hardware combo to track developments. NOTE: Carry out these steps at your own risk. Whilst I will list each step I did and it's all possible within Unraid, it's your data. Read through, and only carry anything out if you feel comfortable. I'm far from an expert - I'm just consolidating valuable information scattered - if this is doing more harm than good, or is repeated elsewhere, then close this off. The solution involves making changes to the settings of the Ironwolf disk. This is done by running some Seagate command line utilities (SeaChest) explained by @TDD here The changes we will be making are Disable EPC Disable Low Current Spinup (not confirmed if this is required) The Seagate utilities refer to disks slightly differently than Unraid, but there is a way to translate one to the other, explained by @SimonF here I have carried out these steps and it looks to have solved the issue for me. I've therefore listed them below in case it helps anybody. It is nowhere near as long-winded as it looks - I've just listed literally every step. 
Note that I am not really a Linux person, so getting the Seagate utilities onto Unraid might look like a right kludge. If there's a better way, let me know. All work is carried out on a Windows machine. I use Notepad to help me prepare commands beforehand: I can construct each command first, then copy and paste it into the terminal. If you have the option, make these changes before upgrading Unraid... Part 1: Identify the disk(s) you need to work on EDIT: See the end of this part for an alternate method of identifying the disks 1. Go down your drives list on the Unraid main tab. Note down the part in brackets next to any relevant disk (eg, sdg, sdaa, sdac, sdad) 2. Open up a Terminal window from the header bar in Unraid 3. Type the following command and press enter. This will give you a list of all drives with their sg and sd reference sg_map 4. Note down the sg reference of each drive you identified in step 1 (eg, sdg=sg6, sdaa=sg26, etc.) There is a second way to get the disk references which you may prefer. It uses SeaChest, so needs carrying out after Part 2 (below). @TDD explains it in this post here... Part 2: Get SeaChest onto Unraid NOTE: I copied SeaChest onto my Flash drive, and then into the tmp folder. There's probably a better way of doing this EDIT: Since writing this the zip file to download has changed its structure, I've updated the instructions to match the new download. 5. Open your flash drive from Windows (eg \\tower\flash), create a folder called "seachest" and enter it 6. Go to https://www.seagate.com/gb/en/support/software/seachest/ and download "SeaChest Utilities" 7. Open the downloaded zip file and navigate to Linux\Lin64\ubuntu-20.04_x86_64\ (when this guide was written, it was just "Linux\Lin64". The naming of the ubuntu folder may change in future downloads) 8. Copy all files from there to the seachest folder on your flash drive Now we need to move the seachest folder to /tmp. I used mc, but many will just copy over with a command. 
The rest of this part takes place in the Terminal window opened in step 2... 9. Open Midnight Commander by typing "mc" 10. Using arrows and enter, click the ".." entry on the left side 11. Using arrows and enter, click the "/boot" folder 12. Tab to switch to the right panel, use arrows and enter to click the ".." 13. Using arrows and enter, click the "/tmp" folder 14. Tab back to the left panel and press F6 and enter to move the seachest folder into tmp 15. F10 to exit Midnight Commander Finally, we need to change to the seachest folder on /tmp and make these utilities executable... 16. Enter the following commands... cd /tmp/seachest ...to change to your new seachest folder, and... chmod +x SeaChest_* ...to make the files executable. Part 3: Making the changes to your Seagate drive(s) EDIT: When this guide was written, there was what looked like a version number at the end of each file, represented by XXXX below. Now each file has "_x86_64-linux-gnu" so where it mentions XXXX you need to replace with that. This is all done in the Terminal window. The commands here have two things that may be different on your setup - the version of SeaChest downloaded (XXXX) and the drive you're working on (YY). This is where Notepad comes in handy - plan out all required commands first 17. Get the info about a drive... SeaChest_Info_XXXX -d /dev/sgYY -i ...in my case (as an example) "SeaChest_Info_150_11923_64 -d /dev/sg6 -i" You should notice that EPC has "enabled" next to it and Low Current Spinup is enabled 18. Disable EPC... SeaChest_PowerControl_XXXX -d /dev/sgYY --EPCfeature disable ...for example "SeaChest_PowerControl_1100_11923_64 -d /dev/sg6 --EPCfeature disable" 19. Repeat step 17 to confirm EPC is now disabled 20. Repeat steps 17-19 for any other disks you need to set 21. 
Disable Low Current Spinup...: SeaChest_Configure_XXXX -d /dev/sgYY --lowCurrentSpinup disable ...for example "SeaChest_Configure_1170_11923_64 -d /dev/sg6 --lowCurrentSpinup disable" It is not possible to check this without rebooting, but if you do not get any errors it's likely to be fine. 22. Repeat step 21 for any other disks You should now be good to go. Once this was done (took about 15 minutes) I rebooted and then upgraded from 6.8.3 to 6.9.1. It's been fine since, whereas before I would get a drive dropping off every few days. Make sure you have a full backup of 6.8.3, and don't make too many system changes for a while in case you need to roll back. SeaChest will be removed when you reboot the system (as it's in /tmp). If you want to retain it on your boot drive, copy it to /tmp instead of moving it. You will need to copy it off /boot to run it each time, as you need to make it executable. Completely fine if you want to hold off for an official fix. I'm not so sure it will be a software fix though, since it affects these specific drives only. It may be a firmware update for the drive, which may just make similar changes to the above. As an afterthought, looking through these Seagate utilities, it might be possible to write a user script to completely automate this. Another alternative is to boot onto a Linux USB and run it outside of Unraid (would be more difficult to identify drives).
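That afterthought about automating the process could look roughly like the sketch below. It is untested and built on assumptions: the SeaChest utilities were unpacked to /tmp/seachest with the _x86_64-linux-gnu binary names, and only the ST8000VN004 model string is matched. Verify each drive manually before trusting anything like this:

```shell
#!/bin/bash
# Hypothetical sketch only: disable EPC and Low Current Spinup on every
# attached ST8000VN004 found via the sg device nodes.
SEACHEST=/tmp/seachest   # wherever the utilities were unpacked (see Part 2)

for sg in /dev/sg*; do
  [ -e "$sg" ] || continue   # no sg devices present: nothing to do
  # Only act on drives that identify as the affected Ironwolf model
  if "$SEACHEST/SeaChest_Info_x86_64-linux-gnu" -d "$sg" -i 2>/dev/null | grep -q 'ST8000VN004'; then
    echo "Disabling EPC and Low Current Spinup on $sg"
    "$SEACHEST/SeaChest_PowerControl_x86_64-linux-gnu" -d "$sg" --EPCfeature disable
    "$SEACHEST/SeaChest_Configure_x86_64-linux-gnu" -d "$sg" --lowCurrentSpinup disable
  fi
done
```

Run it once, then repeat step 17 on each drive to confirm EPC actually shows as disabled.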
    7 points
  48. Causes 
If you use an SSD cache, you may have noticed constant write activity on the SSD. There are several causes for this: 
Write amplification: more is written to the drive than the size of the data itself, either because the allocated data block is larger than the data, or because the file system stores additional information such as checksums, metadata or journaling, which can result in high amplification. 
Health checks: if a container has this feature, it writes to the files "/var/lib/docker/containers/*/hostconfig.json" and "/var/lib/docker/containers/*/config.v2.json" every 5 seconds (!). This is what provides the "healthy", "unhealthy" or "health: starting" status in the Docker overview: Containers I know of that use this include Plex, Pi-Hole and the Nginx Proxy Manager. 
Container logs: a container's logs show us what it is currently doing. Some containers write a lot to "/var/lib/docker/containers/*/*-json.log". I have already seen a 500 MB log file with Plex. 
Internal logs or temporary files: logs or temporary files can also be created inside a container; for example, the Nginx Proxy Manager constantly writes to log files under /var/logs, and the Unifi-Controller constantly writes temporary files to /tmp. 
With the following "find" command we can list all files that are written by containers or the Docker service: 
find /var/lib/docker -type f -not -path "*/diff*" -print0 | xargs -0 stat --format '%Y:%.19y %n' | sort -nr | cut -d: -f2- 2> /dev/null | head -n30 | sed -e 's|/merged|/...|; s|^[0-9-]* ||' 
It lists the 30 most recently modified files. 
If you repeat the command, you quickly see which files are updated over and over (sometimes every second): 
09:36:29 /var/lib/docker/containers/*/hostconfig.json 
09:36:29 /var/lib/docker/containers/*/config.v2.json 
09:12:45 /var/lib/docker/containers/*/*-json.log 
09:12:34 /var/lib/docker/overlay2/*/merged/tmp/*.log 
09:12:34 /var/lib/docker/overlay2/*/merged/var/log/*.log 
... 
09:12:34 /var/lib/docker/btrfs/subvolumes/*/var/log/*.log 
Countermeasures 
What we can do about it: 
Switch Docker from docker.img to "Directory" (yes, "/docker/docker/" is correct!): this alone reduces write amplification, because writing one small line to a log/json file can change more data blocks inside docker.img than a direct write to the file would. Note: this deletes all custom networks and containers. They can simply be reinstalled (without data loss) via Apps > Previous Apps (custom networks must of course be recreated first; this can also be done from the command line). 
Bonus: if you have set the "system" share to "Prefer" on the cache, you should also change the path to /mnt/<cache-pool-name>/docker/docker. Reason: /mnt/user... puts more load on the CPU. 
Optional: once that is done, and if you only use a single SSD, you should also consider switching from BTRFS to XFS, since XFS has a much lower write amplification. That would work like this: set all cache shares from "Prefer" to "Yes", set the Docker & VM services to "No", run the mover, check that the SSD is really empty, reformat the SSD, set all changed shares back to "Prefer" on the cache, run the mover, check that the HDD is really empty, set the Docker & VM services back to "Yes". 
We change the Unraid code via the go file so that a RAM disk is automatically created for the path /var/lib/docker/containers, where the health-check files live. 
Alternatively: set the following for all containers with a "healthy" status (see also below) 
We change the Unraid code via the go file so that a RAM disk is automatically created for the path /var/lib/docker/containers, where the container log files live. 
Alternatively: redirect the container logs for all containers: or disable them: 
Which of these you need when is explained here. 
To determine the activity inside the containers, we need to know which container writes to which path. The "find" command above printed this path, for example: 
/var/lib/docker/overlay2/b04890a87507090b14875f716067feab13081dea9cf879aade865588f14cee67/merged/tmp/hsperfdata_abc/296 
So some container is writing to the folder "/b04890a875...". With this command we find out which one it is: 
csv="CONTAINER;PATHS\n"; for f in /var/lib/docker/image/*/layerdb/mounts/*/mount-id; do subid=$(cat $f); idlong=$(dirname $f | xargs basename); id="$(echo $idlong | cut -c 1-12)"; name=$(docker ps --format "{{.Names}}" -f "id=$id"); [[ -z $name ]] && continue; csv+="\n"$(printf '=%.0s' {1..20})";"$(printf '=%.0s' {1..100})"\n"; [[ -n $name ]] && csv+="$name;" csv+="/var/lib/docker/(btrfs|overlay2).../$subid\n"; csv+="$id;"; csv+="/var/lib/docker/containers/$idlong\n"; for vol in $(docker inspect -f '{{ range .Mounts }}{{ if eq .Type "volume" }}{{ .Destination }}{{ printf ";" }}{{ .Source }}{{ end }}{{ end }}' $id); do csv+="$vol\n"; done; done; echo ""; echo -e $csv | column -t -s';'; echo ""; 
In my case it is the "unifi-controller": 
We also check exactly which subfolder the container wrote to: 
/merged/tmp/hsperfdata_abc/296 
The "/merged" (or "/diff") part can be ignored, so the container's path begins with "/tmp". 
We can normally put this path into a RAM disk without hesitation, because on Debian, Ubuntu and Unraid /tmp is already a RAM disk by default, and applications generally run on these operating systems. But beware: if you determined a different path, it usually must not be moved into a RAM disk, as that can lead to data loss inside the container! 
So we now open the settings of the container we identified, scroll down and choose "Add another Path...": 
Now we enter "/tmp" as the "Container Path" and "/tmp/<containername>" as the "Host Path": 
As soon as you start the container, it no longer writes to "/var/lib/docker/.../tmp" (i.e. to your SSD) but to "/tmp/unifi-controller" and thus to Unraid's RAM disk. 
Once that is done, we repeat the "find" command and compare the results again: 
2021-08-18 10:06:36.256939658 +0200 /var/lib/docker/containers/*/hostconfig.json 
2021-08-18 10:06:36.256939658 +0200 /var/lib/docker/containers/*/config.v2.json 
2021-08-18 09:49:32.698781862 +0200 /var/lib/docker/containers/*/*-json.log 
The writes to the "/tmp" folders have all disappeared, since we moved them into Unraid's RAM disk. We still see the writes to /var/lib/docker/containers (unless you use the --no-healthcheck/--log-driver solution, in which case these will be gone too), but we can ignore them, since we added a RAM disk for them via the code change. 
As a reward, we can then enjoy a cache with no write activity 😁 
Pedantry 
You may find that data is still being written to the SSD, and now you want to know exactly what. 
In that case it is worth adapting the "find" command so that it shows you the newest files in an "appdata" share: 
find /mnt/cache/appdata -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n30 
In my case this was the result, and I have circled what could be optimized (at your own risk, of course): 
Since the Nginx-Proxy-Manager already needed a RAM disk for /tmp, I decided to put /data/logs into a RAM disk as well, so in the end I solved it like this: 
I left the Unifi-Controller's path alone, since its last write was already several minutes ago, so I'm not worried about wearing out the SSD. 
I then waited a few minutes, repeated the command, and indeed noticed a few more things: 
1.) The Zerotier container writes to a conf file every 30 seconds, which annoys me, but after trying to open the file and finding only binary data in it, I'm leaving it alone for now. 
2.) The Unifi-Controller constantly writes to a pile of database files. These are of course necessary for the container to run, so there is nothing we can change there. But since I'm not interested in Unifi's statistics anyway, I keep the container switched off as before and only enable it when I want to change something on the access points. Alternatively, one could consider writing a script that puts /mnt/cache/appdata/unifi-controller into a RAM disk and regularly stops the container, backs up the RAM disk and starts it again. I'll spare myself the effort, though. 
3.) The Plex container writes to a log file every few minutes, which I find acceptable.
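The script idea mentioned for the Unifi-Controller could be sketched roughly as follows. This is a hypothetical outline, not a tested solution: the container name, the tmpfs path and the use of rsync are all assumptions, and you would still need to seed the RAM disk from the SSD copy at array start:

```shell
#!/bin/bash
# Hypothetical sketch: periodically persist RAM-disk-backed appdata to the SSD.
SRC=/tmp/unifi-controller                  # tmpfs copy the container writes to
DST=/mnt/cache/appdata/unifi-controller    # durable copy on the cache pool

if command -v docker >/dev/null 2>&1; then
  docker stop unifi-controller             # stop so the database files are consistent
  rsync -a --delete "$SRC/" "$DST/"        # sync RAM-disk contents back to the SSD
  docker start unifi-controller
fi
```

Scheduled via the User Scripts plugin, this would cap data loss at one interval while keeping the constant database writes off the SSD.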
    7 points
  49. Go to the Plugins tab and check for updates. You'll want to make sure you are running the latest version of the My Servers plugin, which is currently 2021.09.15.1853. If you are still having issues, open a webterminal and type: unraid-api restart
    7 points