Leaderboard


Popular Content

Showing content with the highest reputation since 02/21/17 in all areas

  1. 94 points
    Those following the 6.9-beta releases have been witness to an unfolding schism, entirely of my own making, between myself and certain key Community Developers. To wit: in the last release, I built in some functionality that supplants a feature provided by, and long supported with a great deal of effort by, @CHBMB with assistance from @bass_rock and probably others. Not only did I release this functionality without acknowledging those developers' previous contributions, I didn't even give them notification such functionality was forthcoming. To top it off, I worked with another talented developer who assisted with integration of this feature into Unraid OS, but who was not involved in the original functionality spearheaded by @CHBMB.

Right, this was pretty egregious and unthinking of me to do, and for that I deeply apologize for the offense. The developers involved may or may not accept my apology, but in either case, I hope they believe me when I say this offense was unintentional on my part. I was excited to finally get a feature built into the core product with what I thought was a fairly elegant solution. A classic case of leaping before looking.

I have always said that the true utility and value of Unraid OS lies with our great Community. We have tried very hard over the years to keep this a friendly and helpful place where users of all technical ability can get help and add value to the product. There are many other places on the Internet where people can argue and fight and get belittled; we've always wanted our Community to be different. To the extent that I myself have betrayed this basic tenet of the Community, again, I apologize and commit to making every effort to ensure our Developers are kept in the loop regarding the future technical direction of Unraid OS.

Sincerely,
Tom Mortensen, aka @limetech
  2. 33 points
    ***Update***: Apologies, it seems like there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.
===========
Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to @Hoopster whose post(s) I took this from.

What is Plex Hardware Acceleration?
When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:
- Media is stored in a compatible file container
- Media is encoded in a compatible bitrate
- Media is encoded with compatible codecs
- Media is a compatible resolution
- Bandwidth is sufficient
If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little if any overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file. The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?
You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing. Click on Settings > Quality > Show All > Choose a Quality that isn't the Default one. If you head back to the Now Playing section in Plex you will see that the stream is now being Transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

PreRequisites
1. A Plex Pass - required if you want Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
2. Intel CPU that has Quick Sync Capability - search for your CPU using Intel ARK.
3. Compatible Motherboard - you will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active.
If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs and ALL GPU output from the server is passed through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia Plugin.

Check Your Setup
If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Login to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:
cd /dev/dri
ls
If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:
modprobe i915
There should be no return or errors in the output. Now again run:
cd /dev/dri
ls
You should see the expected items, i.e. card0 and renderD128.

Give your Container Access
Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a manufacturer of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync Device, which we do by typing into the terminal window:
chmod -R 777 /dev/dri
Once that's done, head over to the Docker Tab, click on your Plex container, scroll to the bottom and click on Add another Path, Port, Variable. Select Device from the drop down and enter the following:
Name: /dev/dri
Value: /dev/dri
Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its Quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration.

Persist your config
On reboot unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:
nano /boot/config/go
Add the following lines to the bottom of the go file:
modprobe i915
chmod -R 777 /dev/dri
Press Ctrl X, followed by Y, to save your go file. And you should be golden!
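For reference, here is a minimal sketch of how /boot/config/go might end up looking after the edit, assuming a stock go file that only starts the Management Utility (your existing go file may contain other lines; only the last two are the additions from this guide):

  #!/bin/bash
  # Start the Management Utility (already present in the stock go file)
  /usr/local/sbin/emhttp &

  # Load the Intel iGPU driver and open up /dev/dri for the Plex container
  modprobe i915
  chmod -R 777 /dev/dri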
  3. 30 points
    Welcome (again) to 6.9 release development! This release marks hopefully the last beta before moving to the -rc phase. The reason we still mark it beta is because we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements. With that in mind, the obligatory disclaimer:

Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only!

That said, here's what's new in this release...

Multiple Pools
This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool. The current "cache pool" is now simply a pool named "cache". Pools are created and managed via the Main page.
Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg. If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.
When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to current cache pool operation.
Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:
- pool assigned to the share
- disk1 through disk28
- all the other pools, in strverscmp() order
As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "unRAID array" pools. We are also considering zfs support.
Something else to be aware of: let's say you have a 2-device btrfs pool. This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks". Well, this is mostly true in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this pool. Now you have x2 1-device btrfs pools. Upon array Start, a user might understandably assume there are now x2 pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.

Language Translation
A huge amount of work and effort has been implemented by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka, webGUI. There are several language packs now available, and several more in the works. Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.
Note: Community Applications HAS to be up to date to install languages. Versions of CA prior to 2020.05.12 will not even load on this release. As of this writing, the current version of CA is 2020.06.13a. See also here.
Each language pack exists in public Unraid organization github repos. Interested users are encouraged to clone and issue Pull Requests to correct translation errors.
Language translations and PR merging is managed by @SpencerJ.

Linux Kernel
Upgraded to 5.7. Unfortunately, none of the out-of-tree drivers compile with this kernel. In particular, these drivers are omitted:
- Highpoint RocketRaid r750
- Highpoint RocketRaid rr3740a
- Tehuti Networks tn40xx
If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.

Base Packages
All updated to latest versions. In addition, Linux PAM has been integrated. This will permit us to install 2-factor authentication packages in a future release.

Docker
Updated to version 19.03.11. Also now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications. A delay when initially loading either the dashboard or the docker tab while this happens is to be expected prior to the containers showing up.

Virtualization
libvirt updated to version 6.4.0
qemu updated to version 5.0.0
In addition, integrated changes to the System Devices page by user @Skitals with modifications by user @ljm42. You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VM's.
Note: If you had the VFIO-PCI Config plugin installed, you should remove it as that functionality is now built in to Unraid OS 6.9. Refer also to @ljm42's excellent guide.
In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS. The primary use case is to facilitate accelerated transcoding in docker containers. For this we require Linux to detect and auto-install the appropriate driver. However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page. Users passing GPU's to VM's are encouraged to set this up now.

"unexpected GSO errors"
If your system log is being flooded with errors such as:
Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
you need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net". In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page. For other network configs it may be necessary to directly edit the xml. For example:
<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Other
AFP support has been removed.
Numerous other Unraid OS and webGUI bug fixes and improvements.

Version 6.9.0-beta22 2020-06-16
Caution! This is beta software, consider using on test servers only.
Base distro: aaa_base: version 14.2 aaa_elflibs: version 15.0 build 23 acl: version 2.2.53 acpid: version 2.0.32 apcupsd: version 3.14.14 at: version 3.2.1 attr: version 2.4.48 avahi: version 0.8 bash: version 5.0.017 beep: version 1.3 bin: version 11.1 bluez-firmware: version 1.2 bridge-utils: version 1.6 brotli: version 1.0.7 btrfs-progs: version 5.6.1 bzip2: version 1.0.8 ca-certificates: version 20191130 build 1 celt051: version 0.5.1.3 cifs-utils: version 6.10 coreutils: version 8.32 cpio: version 2.13 cpufrequtils: version 008 cryptsetup: version 2.3.3 curl: version 7.70.0 cyrus-sasl: version 2.1.27 db48: version 4.8.30 dbus: version 1.12.18 dcron: version 4.5 devs: version 2.3.1 build 25 dhcpcd: version 8.1.9 diffutils: version 3.7 dmidecode: version 3.2 dnsmasq: version 2.81 docker: version 19.03.11 dosfstools: version 4.1 e2fsprogs: version 1.45.6 ebtables: version 2.0.11 eject: version 2.1.5 elvis: version 2.2_0 etc: version 15.0 ethtool: version 5.7 eudev: version 3.2.5 file: version 5.38 findutils: version 4.7.0 flex: version 2.6.4 floppy: version 5.5 fontconfig: version 2.13.92 freetype: version 2.10.2 fuse3: version 3.9.1 gawk: version 4.2.1 gd: version 2.2.5 gdbm: version 1.18.1 genpower: version 1.0.5 getty-ps: version 2.1.0b git: version 2.27.0 glib2: version 2.64.3 glibc-solibs: version 2.30 glibc-zoneinfo: version 2020a build 1 glibc: version 2.30 gmp: version 6.2.0 gnutls: version 3.6.14 gptfdisk: version 1.0.5 grep: version 3.4 gtk+3: version 3.24.20 gzip: version 1.10 harfbuzz: version 2.6.7 haveged: version 1.9.8 hdparm: version 9.58 hostname: version 3.23 htop: version 2.2.0 icu4c: version 67.1 inetd: version 1.79s infozip: version 6.0 inotify-tools: version 3.20.2.2 intel-microcode: version 20200609 iproute2: version 5.7.0 iptables: version 1.8.5 iputils: version 20190709 irqbalance: version 1.6.0 jansson: version 2.13.1 jemalloc: version 4.5.0 jq: version 1.6 keyutils: version 1.6.1 kmod: version 27 lbzip2: version 2.5 lcms2: version 2.10 less: version 551 libaio: version 0.3.112 libarchive: version 3.4.3 libcap-ng: version 0.7.10 libcgroup: version 0.41 libdaemon: version 0.14 libdrm: version 2.4.102 libedit: version 20191231_3.1 libestr: version 0.1.11 libevent: version 2.1.11 libfastjson: version 0.99.8 libffi: version 3.3 libgcrypt: version 1.8.5 libgpg-error: version 1.38 libgudev: version 233 libidn: version 1.35 libjpeg-turbo: version 2.0.4 liblogging: version 1.0.6 libmnl: version 1.0.4 libnetfilter_conntrack: version 1.0.8 libnfnetlink: version 1.0.1 libnftnl: version 1.1.7 libnl3: version 3.5.0 libpcap: version 1.9.1 libpciaccess: version 0.16 libpng: version 1.6.37 libpsl: version 0.21.0 librsvg: version 2.48.7 libseccomp: version 2.4.3 libssh2: version 1.9.0 libssh: version 0.9.4 libtasn1: version 4.16.0 libtirpc: version 1.2.6 libunistring: version 0.9.10 libusb-compat: version 0.1.5 libusb: version 1.0.23 libuv: version 1.34.0 libvirt-php: version 0.5.5 libvirt: version 6.4.0 libwebp: version 1.1.0 libwebsockets: version 3.2.2 libx86: version 1.1 libxml2: version 2.9.10 libxslt: version 1.1.34 libzip: version 1.7.0 lm_sensors: version 3.6.0 logrotate: version 3.16.0 lshw: version B.02.17 lsof: version 4.93.2 lsscsi: version 0.31 lvm2: version 2.03.09 lz4: version 1.9.1 lzip: version 1.21 lzo: version 2.10 mc: version 4.8.24 miniupnpc: version 2.1 mpfr: version 4.0.2 nano: version 4.9.3 ncompress: version 4.2.4.6 ncurses: version 6.2 net-tools: version 20181103_0eebece nettle: version 3.6 network-scripts: version 15.0 build 9 nfs-utils: version 2.1.1 
nghttp2: version 1.41.0 nginx: version 1.16.1 nodejs: version 13.12.0 nss-mdns: version 0.14.1 ntfs-3g: version 2017.3.23 ntp: version 4.2.8p14 numactl: version 2.0.11 oniguruma: version 6.9.1 openldap-client: version 2.4.49 openssh: version 8.3p1 openssl-solibs: version 1.1.1g openssl: version 1.1.1g p11-kit: version 0.23.20 patch: version 2.7.6 pciutils: version 3.7.0 pcre2: version 10.35 pcre: version 8.44 php: version 7.4.7 (CVE-2019-11048) pixman: version 0.40.0 pkgtools: version 15.0 build 33 pm-utils: version 1.4.1 procps-ng: version 3.3.16 pv: version 1.6.6 qemu: version 5.0.0 qrencode: version 4.0.2 reiserfsprogs: version 3.6.27 rpcbind: version 1.2.5 rsync: version 3.1.3 rsyslog: version 8.2002.0 samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704) sdparm: version 1.11 sed: version 4.8 sg3_utils: version 1.45 shadow: version 4.8.1 shared-mime-info: version 2.0 smartmontools: version 7.1 spice: version 0.14.1 sqlite: version 3.32.2 ssmtp: version 2.64 sudo: version 1.9.0 sysfsutils: version 2.1.0 sysvinit-scripts: version 2.1 build 31 sysvinit: version 2.96 talloc: version 2.3.1 tar: version 1.32 tcp_wrappers: version 7.6 tdb: version 1.4.3 telnet: version 0.17 tevent: version 0.10.2 traceroute: version 2.1.0 tree: version 1.8.0 ttyd: version 20200606 usbredir: version 0.7.1 usbutils: version 012 utempter: version 1.2.0 util-linux: version 2.35.2 vbetool: version 1.2.2 vsftpd: version 3.0.3 wget: version 1.20.3 which: version 2.21 wireguard-tools: version 1.0.20200513 wsdd: version 20180618 xfsprogs: version 5.6.0 xkeyboard-config: version 2.30 xorg-server: version 1.20.8 xterm: version 356 xz: version 5.2.5 yajl: version 2.1.0 zlib: version 1.2.11 zstd: version 1.4.5 Linux kernel: version 5.7.2 CONFIG_WIREGUARD: WireGuard secure network tunnel CONFIG_IP_SET: IP set support CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors enabled additional hwmon native drivers enabled additional hyperv drivers firmware added: BCM20702A1-0b05-180a.hcd out-of-tree driver status: igb: using in-tree version ixgbe: using in-tree version r8125: using in-tree version r750: (removed) rr3740a: (removed) tn40xx: (removed) Management: AFP support removed Multiple pool support added Multi-language support added avoid sending spinup/spindown to non-rotational devices get rid of 'system' plugin support (never used) integrate PAM integrate ljm42 vfio-pci script changes webgui: turn off username autocomplete in login form webgui: Added new display setting: show normalized or raw device identifiers webgui: Add 'Portuguese (pt)' key map option for libvirt webgui: Added "safe mode" one-shot safemode reboot option webgui: Tabbed case select window webgui: Updated case icons webgui: Show message when too many files for browsing webgui: Main page: hide Move button when user shares are not enabled webgui: VMs: change default network model to virtio-net webgui: Allow duplicate containers different icons webgui: Allow markdown within container descriptions webgui: Fix Banner Warnings Not Dismissing without reload of page webgui: Network: allow metric value of zero to set no default gateway webgui: Network: fix privacy extensions not set webgui: Network settings: show first DNSv6 server webgui: SysDevs overhaul with vfio-pci.cfg binding webgui: Icon buttons re-arrangement webgui: Add update dialog to docker context menu webgui: Update Feedback.php webgui: Use update image dialog for update entry in docker context menu webgui: Task Plugins: Providing Ability to define Display_Name
  4. 28 points
    Unraid Kernel Helper/Builder
With this container you can build your own customized Unraid Kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the Kernel/Firmware/Modules/Rootfilesystem with nVidia & DVB drivers (currently DigitalDevices, LibreElec, XBOX One USB Adapter and TBS OpenSource drivers selectable); optionally you can also enable ZFS, iSCSI Target, Intel iGPU and Mellanox Firmware Tools (Mellanox only for 6.9.0 and up) support.

nVidia Driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.

ZFS installation: Make sure that you uninstall every Plugin that enables ZFS for you, otherwise it is possible that the built images will not work. You can also set the ZFS version from 'latest' to 'master' to build from the latest branch on Github if you are using the 6.9.0 repo of the container.

iSCSI Target: Please note that this feature is at this time command line only! The Unraid-Kernel-Helper-Plugin now has a basic GUI for creation/deletion of IQNs, FileIO/Block Volumes, LUNs, ACL's. ATTENTION: Always mount a block volume with the path '/dev/disk/by-id/...' (otherwise you risk data loss)! For instructions on how to create a target read the manuals: Manual Block Volume.txt Manual FileIO Volume.txt

ATTENTION: Please read the description of the variables carefully! If you started the container, don't interrupt the build process; the container will automatically shut down when everything is finished. I recommend opening a console window and typing in 'docker attach Unraid-Kernel-Helper' (without quotes, and replace 'Unraid-Kernel-Helper' with your Container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself can take a long time depending on your hardware but should be done in ~30 minutes (some tasks can take a very long time depending on your hardware, please be patient).

Plugin now available (will show all information about the images/drivers/modules that it can get): https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg Or simply download it through the CA App.

This is how the build of the images works (simplified):
- The build process begins as soon as the docker starts (you will see the docker image is stopped when the process is finished). Please be sure to set the build options that you need.
- Use the logs or, better, open up a Console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to also see the log (this can be very laggy in the browser depending on how many components you choose). The whole process status is outlined by watching the logs (the button on the right of the docker).
- The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you also need to delete or move them for any subsequent builds.
- There is a backup copied to /mnt/cache/appdata/kernel/backup-version. Copy that to another drive external to your Unraid Server; that way you can easily copy it straight onto the Unraid USB if something goes wrong.

THIS CONTAINER WILL NOT CHANGE ANYTHING ON YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE. YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER.
PLEASE BACKUP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER; THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW THIS WORKS.

UPDATE NOTICE: If a new update of Unraid is released you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g.: 'ich777/unraid-kernel-helper:6.8.3'.

Forum Notice: When something isn't working with or on your server and you make a forum post, always mention that you use a Kernel built by this container! Note that LimeTech does not support custom Kernels, and you should ask in this thread if you are using this specific Kernel when something is not working.

CUSTOM_MODE: This is only for advanced users! In this mode the container will stop right at the beginning and will copy over the build script and the dependencies to build the kernel modules for DVB and joydev in the main directory (I highly recommend using this mode for changing things in the build script like adding patches or other modules to build; connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory; the build script is also executable).

Note: You can use the nVidia & DVB Plugin from linuxserver.io to check if your driver is installed correctly (keep in mind that some things will display wrong and/or not show up, like the driver version in the nVidia Plugin - but you will see the installed graphics cards, and the DVB plugin will show that no kernel driver is installed but you will still see your installed cards - this is simply because I don't know how their plugins work). Thanks to @Leoyzen, klueska from nVidia and linuxserver.io for the motivation to look into how this all works...

For safety reasons I recommend you shut down all other containers and VM's during the build process, especially when building with the nVidia drivers! After you have finished building the images I recommend deleting the container! If you want to build it again please redownload it from the CA App so that the template is always the newest version!

Beta Build (the following is a tutorial for v6.9.0): Upgrade to your preferred stock beta version first, reboot and then start building (to avoid problems)! Download/redownload the template from the CA App and change the following things:
1. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'
2. Select the build options that you prefer
3. Click on 'Show more settings...'
4. Set Beta Build to 'true' (you can now also put in, for example, 'beta25' without quotes to automatically download Unraid v6.9.0-beta25, and then the other steps are not required anymore)
5. Start the container and it will create the folders '/stock/beta' inside the main folder
6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them over within these 2 minutes, simply restart the container and the build will start if it finds all files). (You can also get the files bzimage, bzroot, bzmodules and bzfirmware from the Beta zip file from Limetech, or better, first upgrade to that Beta version and then copy the files over from your /boot directory to the directory created in step 5 to avoid problems) !!!
Please also note that if you build anything Beta, keep an eye on the logs, especially when it comes to building the Kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!

Here you can download the prebuilt images:
Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 450.66)
Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 450.66 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 450.66 | ZFS version: 0.8.4)
Unraid Custom DVB builtin v6.8.3: Download (LE driver: 1.4.0)
Unraid Custom ZFS builtin v6.8.3: Download (ZFS version: 0.8.4)
Unraid Custom iSCSI builtin v6.8.3: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
Unraid Custom nVidia builtin v6.9.0 beta25: Download (nVidia beta driver: 450.66)
Unraid Custom nVidia & DVB builtin v6.9.0 beta25: Download (nVidia beta driver: 450.66 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0 beta25: Download (nVidia beta driver: 450.66 | ZFS build from 'master' branch on Github on 2020.08.19)
Unraid Custom ZFS builtin v6.9.0 beta25: Download (ZFS build from 'master' branch on Github on 2020.07.12)
Unraid Custom iSCSI builtin v6.9.0 beta25: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
Unraid Custom nVidia builtin v6.9.0 beta29: Download (nVidia beta driver: 455.23.04)
Unraid Custom nVidia & DVB builtin v6.9.0 beta29: Download (nVidia beta driver: 455.23.04 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0 beta29: Download (nVidia beta driver: 455.23.04 | ZFS v2.0.0-rc2)
Unraid Custom ZFS builtin v6.9.0 beta29: Download (ZFS v2.0.0-rc2)
Unraid Custom iSCSI builtin v6.9.0 beta29: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
Unraid Custom nVidia builtin v6.9.0 beta30: Download (nVidia driver: 455.28)
Unraid Custom nVidia & DVB builtin v6.9.0 beta30: Download (nVidia driver: 455.28 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0 beta30: Download (nVidia driver: 455.28 | ZFS 0.8.5)
Unraid Custom ZFS builtin v6.9.0 beta30: Download (ZFS 0.8.5)
Unraid Custom iSCSI builtin v6.9.0 beta30: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

Notice for custom pre-built images for Unraid version 6.9.0beta35 and up: Unraid changed the game completely with the release of version 6.9.0beta35 and up, so that you can install the Nvidia, DVB and many other addons to Unraid - more than I can even imagine at the time of writing - so I will not make or post pre-built images here for the new versions. However I will update the container in general so that you can build your own custom images if you want an 'all in one' solution. (You have to be at least on Unraid 6.9.0beta35 to see the Plugins in the CA App for Nvidia, DVB,...)

If you like my work, please consider making a donation
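To make the beta workflow above a little more concrete, here is a rough sketch of the terminal side of it. The /mnt/cache/appdata/kernel path is the container's default main folder mentioned earlier and the folder names follow the description above, but they are assumptions; adjust them to match your own template mappings, and always back up /boot first.

  # Follow the build log of the running container (use your own container name)
  docker attach Unraid-Kernel-Helper

  # Beta build: copy the stock files from the flash drive into the container's /stock/beta folder
  cp /boot/bzimage /boot/bzroot /boot/bzmodules /boot/bzfirmware /mnt/cache/appdata/kernel/stock/beta/

  # After the build finishes, copy the generated files from the output folder back to the flash drive and reboot
  cp /mnt/cache/appdata/kernel/output-VERSION/bz* /boot/
  reboot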
  5. 26 points
    Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk.

What can you do with WireGuard? Let's walk through each of the connection types:
- Remote access to server: Use your phone or computer to remotely access your Unraid server, including Unraid administration via the webgui, plus access to dockers, VMs, and network shares as though you were physically connected to the network.
- Remote access to LAN: Builds on "Remote access to server", allowing you to access your entire LAN as well.
- Server to server access: Allows two Unraid servers to connect to each other.
- LAN to LAN access: Builds on "Server to server access", allowing two entire networks to communicate. (see this guide)
- Server hub & spoke access: Builds on "Remote access to server", except that all of the VPN clients can connect to each other as well. Note that all traffic passes through the server.
- LAN hub & spoke access: Builds on "Server hub & spoke access", allowing you to access your entire LAN as well.
- VPN tunneled access: Route traffic for specific Dockers and VMs through a commercial WireGuard VPN provider (see this guide)
- Remote tunneled access: Securely access the Internet from untrusted networks by routing all of your traffic through the VPN and out Unraid's Internet connection.

In this guide we will walk through how to setup WireGuard so that your trusted devices can VPN into your home network to access Unraid and the other systems on your network.

Prerequisites
You must be running Unraid 6.8 with the Dynamix WireGuard plugin from Community Apps.
Be aware that WireGuard is technically classified as experimental. It has not gone through a full security audit yet and has not reached 1.0 status. But it is the first open source VPN solution that is extremely simple to install, fast, and designed from the ground up to be secure.
Understand that giving someone VPN access to your LAN is just like giving them physical access to your LAN, except they have it 24x7 when you aren't around to supervise. Only give access to people and devices that you trust, and make certain that the configuration details (particularly the private keys) are not passed around insecurely. Regardless of the "connection type" you choose, assume that anyone who gets access to this configuration information will be able to get full access to your network.
This guide works great for simple networks. But if you have Dockers with custom IPs or VMs with strict networking requirements, please see the "Complex Networks" section below.
Unraid will automatically configure your WireGuard clients to connect to Unraid using your current public IP address, which will work until that IP address changes. To future-proof the setup, you can use Dynamic DNS instead. There are many ways to do this; probably the easiest is described in this 2 minute video from SpaceInvaderOne.
If your router has UPnP enabled, Unraid will be able to automatically forward the port for you. If not, you will need to know how to configure your router to forward a port.
You will need to install WireGuard on a client system. It is available for many operating systems: https://www.wireguard.com/install/ Android or iOS make good first systems, because you can get all the details via QR code.

Setting up the Unraid side of the VPN tunnel
First, go to Settings -> Network Settings -> Interface eth0. If "Enable bridging" is "Yes", then WireGuard will work as described below.
If bridging is disabled, then none of the "Peer type of connections" that involve the local LAN will work properly. As a general rule, bridging should be enabled in Unraid.
If UPnP is enabled on your router and you want to use it in Unraid, go to Settings -> Management Access and confirm "Use UPnP" is set to Yes.
On Unraid 6.8, go to Settings -> VPN Manager.
Give the VPN Tunnel a name, such as "MyHome VPN".
Press "Generate Keypair". This will generate a set of public and private keys for Unraid. Take care not to inadvertently share the private key with anyone (such as in a screenshot like this).
By default the local endpoint will be configured with your current public IP address. If you chose to setup DDNS earlier, change the IP address to the DDNS address.
Unraid will recommend a port to use. You typically won't need to change this unless you already have WireGuard running elsewhere on your network.
Hit Apply.
If Unraid detects that your router supports UPnP, it will automatically setup port forwarding for you. If you see a note that says "configure your router for port forwarding..." you will need to login to your router and setup the port forward as directed by the note.
Some tips for setting up the port forward in your router:
- Both the external (source) and internal (target/local) ports should be set to the value Unraid provides. If your router interface asks you to put in a range, use the same port for both the starting and ending values.
- Be sure to specify that it is a UDP port and not a TCP port.
- For the internal (target/local) address, use the IP address of your Unraid system shown in the note.
- Google can help you find instructions for your specific router, i.e. "how to port forward Asus RT-AC68U"
Note that after hitting Apply, the public and private keys are removed from view. If you ever need to access them, click the "key" icon on the right hand side. Similarly, you can access other advanced settings by pressing the "down chevron" on the right hand side. They are beyond the scope of this guide, but you can turn on help to see what they do.
In the upper right corner of the page, change the Inactive slider to Active to start WireGuard. You can optionally set the tunnel to Autostart when Unraid boots.

Defining a Peer (client)
Click "Add Peer".
Give it a name, such as "MyAndroid".
For the initial connection type, choose "Remote access to LAN". This will give your device access to Unraid and other items on your network.
Click "Generate Keypair" to generate public and private keys for the client. The private key will be given to the client / peer, but take care not to share it with anyone else (such as in a screenshot like this).
For an additional layer of security, click "Generate Key" to generate a preshared key. Again, this should only be shared with this client / peer.
Click Apply.
Note: Technically, the peer should generate these keys and not give the private key to Unraid. You are welcome to do that, but it is less convenient as the config files Unraid generates will not be complete and you will have to finish configuring the client manually.

Configuring a Peer (client)
Click the "eye" icon to view the peer configuration. If the button is not clickable, you need to apply or reset your unsaved changes first.
If you are setting up a mobile device, choose the "Create from QR code" option in the mobile app and take a picture of the QR code. Give it a name and make the connection.
The VPN tunnel starts almost instantaneously; once it is up you can open a browser and connect to Unraid or another system on your network. Be careful not to share screenshots of the QR code with anyone, or they will be able to use it to access your VPN.
If you are setting up another type of device, download the file and transfer it to the remote computer via trusted email or dropbox, etc. Then unzip it and load the configuration into the client. Protect this file; anyone who has access to it will be able to access your VPN.

About DNS
The 2019.10.20 release of the Dynamix WireGuard plugin includes a "Peer DNS Server" option (thanks @bonienl!)
If you are having trouble with DNS resolution on the WireGuard client, return to the VPN Manager page in Unraid, switch from Basic to Advanced mode, add the IP address of your desired DNS server into the "Peer DNS Server" field, then install the updated config file on the client. You may want to use the IP address of the router on the LAN you are connecting to, or you could use a globally available IP like 8.8.8.8.
This is required for "Remote tunneled access" mode, if the client's original DNS server is no longer accessible after all traffic is routed through the tunnel.
If you are using any of the split tunneling modes, adding a DNS server may provide name resolution on the remote network, although you will lose name resolution on the client's local network in the process. The simplest solution is to add a hosts file on the client that provides name resolution for both networks.

Complex Networks (updated Feb 20, 2020)
The instructions above should work out of the box for simple networks. With "Use NAT" defaulted to Yes, all network traffic on Unraid uses Unraid's IP, and that works fine if you have a simple setup. However, if you have Dockers with custom IPs or VMs with strict networking requirements, things may not work right (I know, kind of vague, but feel free to read the two WireGuard threads for examples). To resolve:
- In the WireGuard config, set "Use NAT" to No
- In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For instance, for the default pool of 10.253.0.0/24 you should add this static route:
  Network: 10.253.0.0/24 (aka 10.253.0.0 with subnet 255.255.255.0)
  Gateway: <IP address of your Unraid system>
- On the Docker settings page, set "Host access to custom networks" to "Enabled". See this: https://forums.unraid.net/topic/84229-dynamix-wireguard-vpn/page/8/?tab=comments#comment-808801
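For anyone curious what the exported peer configuration actually contains, below is a sketch of a typical WireGuard client config for the "Remote access to LAN" case. All values are placeholders (the keys, the 10.253.0.x tunnel address, the example 192.168.1.0/24 LAN and the DDNS endpoint are illustrative only); Unraid generates the real file or QR code for you, so you normally never write this by hand.

  [Interface]
  # The peer's own tunnel address and private key (keep the key secret)
  PrivateKey = <client private key>
  Address = 10.253.0.2/32
  DNS = 192.168.1.1

  [Peer]
  # The Unraid server's public key, optional preshared key, and public endpoint
  PublicKey = <Unraid public key>
  PresharedKey = <preshared key>
  Endpoint = my-ddns-name.example.com:51820
  # "Remote access to LAN": route the tunnel subnet plus the home LAN through the VPN
  AllowedIPs = 10.253.0.1/32, 192.168.1.0/24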
  6. 24 points
    All of us at Lime Technology are very excited to announce Larry Meaney as a new full-time hire. Larry has joined us as a Senior Developer/Project Lead. Here's a little more about Larry: Please help us give Larry aka @ljm42 a warm welcome!
  7. 24 points
    tldr: If you are running Unraid OS 6 version 6.8.1 or later, the following does not apply (mitigations are in place). If you are running any earlier Unraid OS 6 release, i.e., 6.8.0 and earlier, please read on.

On Jan 5, 2020 we were informed by a representative from sysdream.com of security vulnerabilities they discovered in Unraid OS. Their report is attached to this post. At the time, version 6.8.0 was the stable release. The most serious issue concerns version 6.8.0. Here they discovered a way to bypass our forms-based authentication and look at the contents of various webGUI pages (that is, without having to log in first). Then using another exploit, they were further able to demonstrate the ability to inject "arbitrary code execution". Someone clever enough could use this latter exploit to execute arbitrary code on a server. (That person would have to have access to the same LAN as the server, or know the IP address:port of the server if accessible via the Internet.)

Even in versions prior to 6.8.0, the "arbitrary code execution" vulnerability exists if an attacker can get you to visit a webpage using a browser that is already logged into an Unraid server (and they know or can guess the host name of the server). In this case, clicking the link could cause injection of code to the server. This is similar to the CSRF vulnerability we fixed a few years ago.

In summary, sysdream.com recognizes 3 vulnerabilities:
1. That it's possible to bypass username/password authentication and access pages directly in v6.8.0.
2. That once authentication is bypassed, it's possible to inject and have the server execute arbitrary code.
3. That even if bug #1 is fixed, #2 is still possible if an attacker can get you to click a link using a browser already authenticated to your Unraid server (6.8.0 and all earlier versions of Unraid 6).

Mitigations are as follows:
- First, if you are running version 6.8.0, either upgrade to the latest stable release, or downgrade to an earlier release and install the sysdream mitigation plugin. We are not going to provide a mitigation plugin for 6.8.0.
- If you are running any 6.6 or 6.7 Unraid release, the best course of action is to upgrade to the latest stable release; otherwise, please install this mitigation plugin: https://raw.githubusercontent.com/limetech/sysdream/master/sysdream.plg This plugin will make a small patch to the webGUI template.php file in order to prevent arbitrary code execution. This plugin will work with all 6.6.x and 6.7.x releases and should also be available via Community Apps within a couple hours.
- We are not going to provide a mitigation for Unraid releases 6.5.x and earlier. If you are running an earlier release and cannot upgrade for some reason, please send us an email: support@lime-technology.com.

I want to thank sysdream.com for bringing this to our attention, @eschultz for initial testing and fixes, and @bonienl for creation of the sysdream mitigation plugin. I also want to remind everyone: please set a strong root password, and carefully consider the implications and security measures necessary if your server is accessible via the Internet. Finally, try and keep your server up-to-date.

VULNERABILITY_DISCLOSURE.pdf
  8. 23 points
    @tillkrueger @jenskolson @trurl @unrateable @jonathanm @1812 @Squid since all of you were active in this thread: I found a way to get the file transfer window back.
1. Bring up the Guacamole left panel menu (CTRL ALT SHIFT)
2. Set Input Method = On Screen Keyboard
3. In the On Screen Keyboard, use ALT (it'll stay on, 'pressed'), then TAB, select it using TAB, then ALT again (to turn it off)
A tip I found too: anytime you do a copy or move, it's best to use the 'queue' button in the pop-up confirmation dialog so that multiple transfers are handled sequentially. It's easy to get to the queue, and I found that using it often removes much of my need to see the file transfer progress window. The 'Queue Manager' is easy to get back on the screen by using the top menu, Tools > Queue Manager.
  9. 22 points
    Something else I wanted to add, as long as we're talking about security measures in the pipe: we are looking at integrating various 2-factor solutions directly into Unraid OS, such as Google Authenticator.
  10. 22 points
    For as long as I can remember Unraid has never been great at simultaneous array disk performance, but it was pretty acceptable. Since v6.7 various users have complained, for example, of very poor performance when running the mover and trying to stream a movie. I noticed this myself yesterday when I couldn't even start watching an SD video using Kodi just because there were writes going on to a different array disk, and this server doesn't even have a parity drive. So I did a quick test on my test server: the problem is easily reproducible and started with the first v6.7 release candidate, rc1.

How to reproduce:
- Server just needs 2 assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted
- Used cp to copy a few video files from cache to disk2
- While cp was going on, tried to stream a movie from disk1; it took a long time to start and would keep stalling/buffering

Tried to copy one file from disk1 (still while cp was going on on disk2), with v6.6.7: [screenshot] and with v6.7-rc1: [screenshot]. A few times the transfer will go higher for a couple of seconds, but most times it's at a few KB/s or completely stalled. Also tried with all unencrypted xfs formatted devices and it was the same. The server where the problem was detected and the test server have no hardware in common: one is based on an X11 Supermicro board, the test server is X9 series; one server is using HDDs, the test server SSDs, so it's very unlikely to be hardware related.
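A minimal way to reproduce this from the Unraid console might look like the sketch below; the share and file names are placeholders, and dd simply stands in for the Kodi read so you can watch the throughput of a read from one array disk while another is being written to.

  # Terminal 1: sustained writes from cache to disk2 (simulating the mover or a large copy)
  cp /mnt/cache/movies/*.mkv /mnt/disk2/movies/

  # Terminal 2: read a large file from disk1 while the copy runs and watch the transfer rate
  dd if=/mnt/disk1/movies/some-movie.mkv of=/dev/null bs=1M status=progress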
  11. 21 points
  12. 21 points
    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or is the result of some recent change on the Docker hub API. Also not sure if it's intentional or a bug.

This causes an issue since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is:

/**
 * Step 4: Get Docker-Content-Digest header from manifest file
 */
$ch = getCurlHandle($manifestURL, 'HEAD');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
  'Accept: application/vnd.docker.distribution.manifest.v2+json',
  'Authorization: Bearer ' . $token
]);

which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from docker hub, which of course do not match.

Changing the Accept header to the list mime type, 'application/vnd.docker.distribution.manifest.list.v2+json', causes it to no longer consistently report updates available for these containers. Doing this however reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest if the list is not available and the digest for the v1 manifest doesn't match the digest for the v2 manifest.

If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', docker hub will fall back correctly to the v2 manifest, and the digests now match the local output for both containers using straight manifests and those using manifest lists. Until docker hub inevitably makes another change.

/**
 * Step 4: Get Docker-Content-Digest header from manifest file
 */
$ch = getCurlHandle($manifestURL, 'HEAD');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
  'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
  'Authorization: Bearer ' . $token
]);
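To see the same behaviour outside of the webgui, a curl session along these lines can ask Docker Hub for the digest using the combined Accept header described above. The library/nginx repository and the latest tag are only examples, and the token/registry endpoints shown are the standard public Docker Hub ones at the time of writing; treat this as a sketch, since the hub's behaviour can change.

  # Get a pull token for the example repository
  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" | jq -r .token)

  # Request the manifest, preferring a manifest list and falling back to a v2 manifest,
  # then read the Docker-Content-Digest header that gets compared against the local digest
  curl -sI \
    -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json" \
    -H "Authorization: Bearer $TOKEN" \
    "https://registry-1.docker.io/v2/library/nginx/manifests/latest" | grep -i docker-content-digest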
  13. 20 points
    Check out this awesome introduction video produced by @SpaceInvaderOne:
  14. 19 points
    Summary: Support Thread for ich777 Gameserver Dockers (CounterStrike: Source & CounterStrike: GO, TeamFortress 2, ArmA III,... - complete list in the second post)
Application: SteamCMD
DockerHub: https://hub.docker.com/r/ich777/steamcmd
All dockers are easy to set up and are highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to confirm they are reachable and show up in the server list from the "outside".
The default password for the gameservers, if enabled, is: Docker
If there is an admin password, the default password is: adminDocker
Please read the description of each docker and the variables that you install (some dockers need special variables to run).
If you like my work please consider donating for further requests of game servers where I don't own the game.
Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid
If you like my work, please consider making a donation
  15. 19 points
    This is a bug fix and security update release. Due to a security vulnerability discovered in forms-based authentication: ALL USERS ARE STRONGLY ENCOURAGED TO UPGRADE.
To upgrade:
- If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page.
- If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.
- If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
Refer also to @ljm42's excellent 6.4 Update Notes, which are helpful especially if you are upgrading from a pre-6.4 release.
Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.

Version 6.8.1 2020-01-10
Changes vs. 6.8.0

Base distro:
libuv: version 1.34.0
libvirt: version 5.10.0
mozilla-firefox: version 72.0.1 (CVE-2019-17026, CVE-2019-17015, CVE-2019-17016, CVE-2019-17017, CVE-2019-17018, CVE-2019-17019, CVE-2019-17020, CVE-2019-17021, CVE-2019-17022, CVE-2019-17023, CVE-2019-17024, CVE-2019-17025)
php: version 7.3.13 (CVE-2019-11044 CVE-2019-11045 CVE-2019-11046 CVE-2019-11047 CVE-2019-11049 CVE-2019-11050)
qemu: version 4.2.0
samba: version 4.11.4
ttyd: version 20200102
wireguard-tools: version 1.0.20200102

Linux kernel: version 4.19.94
kernel_firmware: version 20191218_c4586ff (with additional Intel BT firmware)
CONFIG_THUNDERBOLT: Thunderbolt support
CONFIG_INTEL_WMI_THUNDERBOLT: Intel WMI thunderbolt force power driver
CONFIG_THUNDERBOLT_NET: Networking over Thunderbolt cable
oot: Highpoint rr3740a: version v1.19.0_19_04_04
oot: Highpoint r750: version v1.2.11-18_06_26 [restored]
oot: wireguard: version 0.0.20200105

Management:
add cache-busting params for noVNC url assets
emhttpd: fix cryptsetup passphrase input
network: disable IPv6 for an interface when its settings is "IPv4 only"
webgui: Management page: fixed typos in help text
webgui: VM settings: fixed Apply button sometimes not working
webgui: Dashboard: display CPU load full width when no HT
webgui: Docker: show 'up-to-date' when status is unknown
webgui: Fixed: handle race condition when updating share access rights in Edit User
webgui: Docker: allow to set container port for custom bridge networks
webgui: Better support for custom themes (not perfect yet)
webgui: Dashboard: adjusted table positioning
webgui: Add user name and user description verification
webgui: Edit User: fix share access assignments
webgui: Management page: remove UPnP conditional setting
webgui: Escape shell arg when logging csrf mismatch
webgui: Terminal button: give unsupported warning when Edge/MSIE is used
webgui: Patched vulnerability in auth_request
webgui: Docker: added new setting "Host access to custom networks"
webgui: Patched vulnerability in template.php
  16. 19 points
    I was wanting to do GPU Hardware Acceleration with a Plex Docker, but unRAID doesn't appear to have the drivers for the GPUs loaded. It would be nice to have the option to install the drivers so the dockers could use them.
  17. 18 points
    PLEASE - PLEASE - PLEASE, EVERYONE POSTING IN THIS THREAD: IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED.... THANK YOU
Here is a guide which explains how to use the container.
  18. 18 points
    Hey Guys,
First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore I won't be expecting this to be on top of the priority list, but I'm hoping someone on the developer team is willing to invest (perhaps after the release).
Hardware and software involved: 2 x 1TB Samsung EVO 860, setup with LUKS encryption in a BTRFS RAID1 pool.

### TLDR (but I'd suggest to read on anyway 😀)
The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSD's quite rapidly. This appears to be only happening on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but not sure). Hosting the Docker files directory on /mnt/cache instead of using the loopdevice seems to fix this problem. A possible idea for implementation is proposed at the bottom. Grateful for any help provided!
###

I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug; I also was able to build (kind of) a workaround for my situation. More details below.

So to see what was actually hammering on the cache I started doing all the obvious, like using a lot of find commands to trace files that were written to every few minutes, and also used the fileactivity plugin. Neither was able to trace down any writes that would explain 400 GBs worth of writes a day for just a few containers that aren't even that active.

Digging further I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.

This gave me a situation I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them in a BTRFS RAID1 setup, created the loopdevice on the BTRFS mountpoint (same as /dev/cache) and mounted it on /var/lib/docker. I made sure I had the NoCow flags set on the IMG file like unRAID does. Strangely this did not show any excessive writes; iotop shows really healthy values for the same workload (I migrated the docker content over to the VM).

After my Debian troubleshooting I went back over to the unRAID server, wondering whether the loopdevice is created weirdly, so I took the exact same steps to create a new image and pointed the settings from the GUI there. Still the same write issues.

Finally I decided to take the whole image out of the equation and took the following steps:
- Stopped docker from the WebGUI so unRAID would properly unmount the loop device.
- Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint
- Created a share on the cache for the docker files
- Created a softlink from /mnt/cache/docker to /var/lib/docker
- Started docker using "/etc/rc.d/rc.docker start"
- Started my Bitwarden containers.
Looking into the stats with "iotop -ao" I did not see any excessive writing taking place anymore. I had the containers running for like 3 hours and maybe got 1GB of writes total (note that on the loopdevice this gave me 2.5GB every 10 minutes!).

Now don't get me wrong, I understand why the loopdevice was implemented.
Dockerd is started with options to make it run with the BTRFS driver, and since the image file is formatted with the BTRFS filesystem this works on every setup; it doesn't even matter whether it runs on XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache, because pointing it to /mnt/user would not allow me to start using the BTRFS driver (obviously the unRAID filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the container; it's all based on the assumption that everyone is using docker on BTRFS (which of course they are, because of the container 😁). I must say that my approach also broke when I changed something in the shares: certain services get restarted, causing docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just a way to see whether the loop device was causing the issue, which I think my tests did point out.

Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all docker containers off all day, because 300-400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've shown that it's not needed at all. It does defeat the purpose of my NAS and SSD cache though, since their main purpose was hosting docker containers while allowing the HDs to spin down. Again, I'm hoping someone on the dev team acknowledges this problem and is willing to invest. I did get quite a few hits on the forums and reddit without anyone actually pointing out the root cause of the issue. I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking about possible ways to implement a workaround, like adjusting the Docker Settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), start docker on a share on the /mnt/cache partition instead of using the vDisk. In this way you would still keep all the advantages of the docker.img file (cross filesystem type) and users who don't care about the writes could still use it, but you'd be massively helping out others who are concerned about these writes.

I'm not attaching diagnostic files since they would probably not show what's needed. Also, if this should have been in feature requests, I'm sorry; but I feel that, since the current solution is misbehaving in terms of writes, this could also be placed in the bug report section. Thanks though for this great product, I have been using it so far with a lot of joy! I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly. Cheers!
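For anyone who wants to try the same experiment, here is a minimal sketch of the softlink workaround described above. It assumes docker has already been stopped from the WebGUI, that the mountpoint check in rc.docker has been relaxed as mentioned, and that the share/directory name is just an example; this is the author's temporary workaround, not an official procedure.

```bash
# Sketch of the temporary workaround described above -- illustrative only.
# Assumes Docker was stopped from the WebGUI (so the docker.img loop device is
# unmounted) and /etc/rc.d/rc.docker no longer insists on /var/lib/docker
# being a mountpoint.

# Create a directory/share for the Docker data directly on the cache pool
mkdir -p /mnt/cache/docker

# Move any existing /var/lib/docker out of the way, then point it at the cache
mv /var/lib/docker /var/lib/docker.bak 2>/dev/null
ln -s /mnt/cache/docker /var/lib/docker

# Start the docker service again and watch accumulated writes per process
/etc/rc.d/rc.docker start
iotop -ao
```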
  19. 18 points
tldr: If you require hardware support offered by the Linux 5.x kernel then I suggest you remain on 6.8.0-rc7 and wait until 6.9.0-rc1 is published before upgrading.

The "unexpected GSO type" bug is looking to be a show stopper for Unraid 6.8 using the Linux 5.3 or 5.4 kernel. We can get it to happen easily and quickly simply by having any VM running and then also starting a docker App where Network Type has been set to "Custom : br0" (in my case) and a static IP has been set for the container, or by toggling between setting a static IP and letting docker DHCP assign one. There are probably a lot of users waiting for a stable release who will see this issue, and therefore I don't think we can publish with this bug. The bug does not occur with any 4.19.x or 4.20.x Linux kernel, but it does occur with all kernels starting with 5.0. This implies the bug was introduced with some code change in the initial 5.0 kernel. The problem is that we are not certain where to report the bug; it could be a kernel issue or a docker issue. Of course, it could also be something we are doing wrong, since this issue is not reported in any other distro AFAIK. We are continuing the investigation and putting together a report to submit either to the kernel mailing list or as a docker issue. In any case, an actual fix will probably take quite a bit more time, especially since we are heading into the holidays. Therefore this is what we plan to do:

For 6.8: revert the kernel to 4.19.87 and publish 6.8.0-rc8. Those currently running stable (6.7.2) will see no loss of functionality because that release is also on the 4.19 kernel. Hopefully this will be the last or next-to-last -rc and then we can publish 6.8 stable. Note: we cannot revert to the 4.20 kernel because that kernel is EOL and has not had any updates in months.

For 6.9: as soon as 6.8 stable is published we'll release 6.9.0-rc1 on the next release branch. This will be exactly the same as 6.8 except that we'll update to the latest 5.4 kernel (and the "unexpected GSO type" bug will be back). We will use the next branch to try and solve this bug. New features, such as multiple pools, will be integrated into the 6.10 release, which is the current work-in-progress.

We'll wait a day or two to publish 6.8-rc8 with the reverted kernel in the hope that those affected will see this post first.
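For context, the container half of the reproduction above is nothing exotic; a rough command-line equivalent of a "Custom : br0" container with a static IP (the container name, address and image below are just placeholders) would be:

```bash
# Hypothetical equivalent of a docker App on the custom br0 network with a
# static IP -- run alongside a running VM to hit the described bug.
# Container name, address and image are illustrative only.
docker run -d --name gso-test \
  --network br0 \
  --ip 192.168.1.50 \
  nginx
```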
  20. 18 points
    To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to @ljm42 excellent 6.4 Update Notes which are helpful especially if you are upgrading from a pre-6.4 release. Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. New in Unraid OS 6.8 release: The Update OS tool still downloads the new release zip file to RAM but then extracts directly to USB flash boot device. You will probably notice a slight difference in speed of extract messages. Also the 'sync' command at the end has been replaced with 'sync -f /boot' to prevent spin-up of all devices before the operation is considered complete. Forms based authentication If you have set a root password for your server, when accessing webGUI you'll now see a nice login form. There still is only one user for Unraid so for username enter root. This form should be compatible with all major password managers out there. We always recommend using a strong password. There is no auto-logout implemented yet, please click Logout on menu bar or completely close your browser to logout. Linux kernel We started 6.8 development and initial testing using Linux 5.x kernel. However there remains an issue when VM's and Docker containers using static IP addresses are both running on the same host network interface. This issue does not occur with the 4.19 kernel. We are still studying this issue and plan to address it in the Unraid 6.9 release. Changes to the kernel include: Update to 4.19.88 Include latest Intel microcode for yet another hardware vulnerability mitigation. Default scheduler now 'mq-deadline', but this can be changed via new Settings/Disk Settings/Scheduler setting. Enabled Huge Page support, though no UI control yet. binfmt_misc support. Fix chelsio missing firmware. Added oot: Realtek r8125: version 9.002.02 Removed Highpoint r750 driver [does not work] md/unraid driver Introduced "multi-stream" support: Reads on devices which are not being written should run at full speed. In addition, if you have set the md_write_method tunable to "reconstruct write", then while writing, if any read streams are detected, the write method is switched to "read/modifywrite". Parity sync/check should run at full speed by default. Parity sync/check is throttled back in presence of other active streams. The "stripe pool" resource is automatically shared evenly between all active streams. As a result got rid of some Tunables: md_sync_window md_sync_thresh and added some tunables: md_queue_limit md_sync_limit [-rc2] md_scheduler Please refer to Settings/Disk Settings help text for description of these settings. WireGuard® support - available as a plugin via Community Apps. Our WireGuard implementation and UI is still a work-in-process; for this reason we have made this available as a plugin, though the latest WireGuard module is included in our Linux kernel. I want to give special thanks to @bonienl who wrote the plugin with lots of guidance from @ljm42 - thank you! I also should give a shout out to @NAS who got us rolling on this. If you don't know about WireGuard it's something to look into! Note: WireGuard is a registered trademark of Jason A. Donenfeld. 
Guide here: WS-Discovery support - Finally you can get rid of SMBv1 and get reliable Windows network discovery. This feature is configured on the Settings/SMB Settings page and enabled by default. Also on same settings page is Enable NetBIOS setting. This is enabled by default, however if you no longer have need for NetBIOS discovery you can turn it off. When turned off, Samba is configured to accept only SMBv2 protocol and higher. Added mDNS client support in Unraid OS. This means, for example, from an Unraid OS terminal session to ping another Unraid OS server on your network you can use (e.g., 'tower'): ping tower.local instead of ping tower Note the latter will still work if you have NetBIOS enabled. User Share File System (shfs) changes: Integrated FUSE-3 - This should increase performance of User Share File System. Fixed bug with hard link support. Previously a 'stat' on two directory entries referring to same file would return different i-node numbers, thus making it look like two independent files. This has been fixed however there is a config setting on Settings/Global Share Settings called "Tunable (support hard links)". The default is Yes, but with certain very old media and DVD players which access shares via NFS, you may need to set this to No. Note: if you have custom config/extra.cfg file, get rid of any lines specifying additional FUSE options unless you know they are compatible with FUSE-3. Other improvements/bug fixes: Fixed SQLite DB Corruption bug. Format - during Format any running parity sync/check is automatically Paused and then resumed upon Format completion. Encryption - an entered passphrase is not saved to any file. Fixed bug where multi-device btrfs pool was leaving metadata set to dup instead of raid1. Fixed bug where quotes were not handled properly in passwords. Numerous base package updates including updating PHP to version 7.3.x, Samba to version 4.11.x. Several other small bug fixes and improvements. Known Issues and Other Errata Some users have reported slower parity sync/check rates for very wide arrays (20+ devices) vs. 6.7 and earlier releases - we are still studying this problem. In another step toward better security, the USB flash boot device is configured so that programs and scripts residing there cannot be directly executed (this is because the 'x' bit is set now only for directories). Commands placed in the 'go' file still execute because during startup, that file is copied to /tmp first and then executed from there. If you have created custom scripts you may need to take a similar approach. AFP is now deprecated and we plan to remove support. A note on password strings Password strings can contain any character however white space (space and tab characters) is handled specially: all leading and trailing white space is discarded multiple embedded white space is collapsed to a single space character. By contrast, encryption passphrase is used exactly as-is. 
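As a small illustration of the shfs hard link fix mentioned above, a quick check like the following (share and file names are just examples) should now report the same i-node number for both directory entries on a user share when "Tunable (support hard links)" is set to Yes:

```bash
# Illustrative only -- share and file names are made up.
cd /mnt/user/some-share
touch original.txt
ln original.txt hardlink.txt
# Both entries should now report the same i-node number:
stat -c '%i %n' original.txt hardlink.txt
```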
Version 6.8.0 2019-12-10 Base distro: aaa_elflibs: version 15.0 build 16 acpid: version 2.0.32 adwaita-icon-theme: version 3.34.3 at-spi2-atk: version 2.34.1 at-spi2-core: version 2.34.0 at: version 3.2.1 atk: version 2.34.1 bash: version 5.0.011 binutils: version 2.33.1 btrfs-progs: version 5.4 bzip2: version 1.0.8 ca-certificates: version 20191130 cifs-utils: version 6.9 cpio: version 2.13 cryptsetup: version 2.2.2 curl: version 7.67.0 dbus-glib: version 0.110 dbus: version 1.12.16 dhcpcd: version 8.1.2 docker: version 19.03.5 e2fsprogs: version 1.45.4 ebtables: version 2.0.11 encodings: version 1.0.5 etc: version 15.0 ethtool: version 5.3 expat: version 2.2.9 file: version 5.37 findutils: version 4.7.0 freetype: version 2.10.1 fuse3: version 3.6.2 gdbm: version 1.18.1 gdk-pixbuf2: version 2.40.0 git: version 2.24.0 glib2: version 2.62.3 glibc-solibs: version 2.30 glibc-zoneinfo: version 2019c glibc: version 2.30 glu: version 9.0.1 gnutls: version 3.6.11.1 gtk+3: version 3.24.13 harfbuzz: version 2.6.4 haveged: version 1.9.8 hostname: version 3.23 hwloc: version 1.11.13 icu4c: version 65.1 intel-microcode: version 20191115 iproute2: version 5.4.0 iptables: version 1.8.4 iputils: version 20190709 irqbalance: version 1.6.0 kernel-firmware: version 20191118_e8a0f4c keyutils: version 1.6 less: version 551 libICE: version 1.0.10 libX11: version 1.6.9 libXi: version 1.7.10 libXt: version 1.2.0 libarchive: version 3.4.0 libcap-ng: version 0.7.10 libcroco: version 0.6.13 libdrm: version 2.4.99 libedit: version 20191025_3.1 libepoxy: version 1.5.4 libevdev: version 1.7.0 libevent: version 2.1.11 libgcrypt: version 1.8.5 libgudev: version 233 libidn2: version 2.3.0 libjpeg-turbo: version 2.0.3 libnftnl: version 1.1.5 libnl3: version 3.5.0 libpcap: version 1.9.1 libpciaccess: version 0.16 libpng: version 1.6.37 libpsl: version 0.21.0 librsvg: version 2.46.4 libseccomp: version 2.4.1 libssh2: version 1.9.0 libtasn1: version 4.15.0 libusb: version 1.0.23 libvirt-php: version 20190803 libvirt: version 5.8.0 (CVE-2019-10161, CVE-2019-10166, CVE-2019-10167, CVE-2019-10168) libwebp: version 1.0.3 libxml2: version 2.9.10 libxslt: version 1.1.34 libzip: version 1.5.2 lm_sensors: version 3.6.0 logrotate: version 3.15.1 lsof: version 4.93.2 lsscsi: version 0.30 lvm2: version 2.03.07 lz4: version 1.9.1 mkfontscale: version 1.2.1 mozilla-firefox: version 71.0 (CVE-2019-11751, CVE-2019-11746, CVE-2019-11744, CVE-2019-11742, CVE-2019-11736, CVE-2019-11753, CVE-2019-11752, CVE-2019-9812, CVE-2019-11741, CVE-2019-11743, CVE-2019-11748, CVE-2019-11749, CVE-2019-5849, CVE-2019-11750, CVE-2019-11737, CVE-2019-11738, CVE-2019-11747, CVE-2019-11734, CVE-2019-11735, CVE-2019-11740, CVE-2019-11754, CVE-2019-9811, CVE-2019-11711, CVE-2019-11712, CVE-2019-11713, CVE-2019-11714, CVE-2019-11729, CVE-2019-11715, CVE-2019-11716, CVE-2019-11717, CVE-2019-1 1718, CVE-2019-11719, CVE-2019-11720, CVE-2019-11721, CVE-2019-11730, CVE-2019-11723, CVE-2019-11724, CVE-2019-11725, CVE-2019-11727, CVE-2019-11728, CVE-2019-11710, CVE-2019-11709) (CVE-2018-6156, CVE-2019-15903, CVE-2019-11757, CVE-2019-11759, CVE-2019-11760, CVE-2019-11761, CVE-2019-11762, CVE-2019-11763, CVE-2019-11765, CVE-2019-17000, CVE-2019-17001, CVE-2019-17002, CVE-2019-11764) (CVE-2019-11756, CVE-2019-17008, CVE-2019-13722, CVE-2019-11745, CVE-2019-17014, CVE-2019-17009, CVE-2019-17010, CVE-2019-17005, CVE-2019-17011, CVE-2019-17012, CVE-2019-17013) nano: version 4.6 ncurses: version 6.1_20191026 net-tools: version 20181103_0eebece nettle: version 3.5.1 
network-scripts: version 15.0 nghttp2: version 1.40.0 nginx: version 1.16.1 (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516) nodejs: version 10.16.3 nss-mdns: version 0.14.1 ntp: version 4.2.8p13 openldap-client: version 2.4.48 openssh: version 8.1p1 openssl-solibs: version 1.1.1d openssl: version 1.1.1d p11-kit: version 0.23.18.1 pcre2: version 10.34 php: version 7.3.12 (CVE-2019-11042, CVE-2019-11041) (CVE-2019-11043) pixman: version 0.38.4 pkgtools: version 15.0 build 28 procps-ng: version 3.3.15 qemu: version 4.1.1 (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091) (CVE-2019-14378, CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-12068, CVE-2019-11091) qrencode: version 4.0.2 rpcbind: version 1.2.5 rsyslog: version 8.1908.0 samba: version 4.11.3 (CVE-2019-10197) (CVE-2019-10218, CVE-2019-14833, CVE-2019-14847) (CVE-2019-14861, CVE-2019-14870) sdparm: version 1.10 sessreg: version 1.1.2 setxkbmap: version 1.3.2 sg3_utils: version 1.44 shadow: version 4.7 shared-mime-info: version 1.15 sqlite: version 3.30.1 sudo: version 1.8.29 sysvinit-scripts: version 2.1 sysvinit: version 2.96 talloc: version 2.3.0 tdb: version 1.4.2 tevent: version 0.10.1 ttyd: version 20191025 usbutils: version 012 util-linux: version 2.34 wget: version 1.20.3 wireguard: version 0.0.20191206 wsdd: version 20180618 build 2 xauth: version 1.1 xclock: version 1.0.9 xfsprogs: version 5.3.0 xkeyboard-config: version 2.28 xorg-server: version 1.20.6 xrandr: version 1.5.1 xterm: version 351 xwininfo: version 1.1.5 zstd: version 1.4.4 Linux kernel: version 4.19.88 CONFIG_BINFMT_MISC: Kernel support for MISC binaries CONFIG_CGROUP_NET_PRIO: Network priority cgroup CONFIG_DEBUG_FS: Debug Filesystem CONFIG_DUMMY: Dummy net driver support CONFIG_HUGETLBFS: HugeTLB file system support CONFIG_ICE: Intel(R) Ethernet Connection E800 Series Support CONFIG_IGC: Intel(R) Ethernet Controller I225-LM/I225-V support CONFIG_IPVLAN: IP-VLAN support CONFIG_IPVTAP: IP-VLAN based tap driver CONFIG_IP_VS: IP virtual server support CONFIG_IP_VS_NFCT: Netfilter connection tracking CONFIG_IP_VS_PROTO_TCP: TCP load balancing support CONFIG_IP_VS_PROTO_UDP: UDP load balancing support CONFIG_IP_VS_RR: round-robin scheduling CONFIG_MLX5_CORE_IPOIB: Mellanox 5th generation network adapters (connectX series) IPoIB offloads support CONFIG_NETFILTER_XT_MATCH_IPVS: "ipvs" match support CONFIG_NET_CLS_CGROUP: Control Group Classifier CONFIG_SCSI_MQ_DEFAULT: SCSI: use blk-mq I/O path by default CONFIG_SCSI_SMARTPQI: Microsemi PQI Driver CONFIG_WIREGUARD: IP: WireGuard secure network tunnel chelsio: add missing firmware change schedulers from modules to built-ins default scheduler now mq-deadline md/unraid: version 2.9.13 (multi-stream support, do not fail read-ahead, more tunables) increase BLK_MAX_REQUEST_COUNT from 16 to 32 oot: Highpoint rr3740a: version: v1.17.0_18_06_15 oot: Highpoint rsnvme: version v1.2.16_19_05_06 oot: Highpoint r750 removed (does not work) oot: Intel ixgbe: version 5.6.5 oot: Realtek r8125: version 9.002.02 oot: Tehuti tn40xx: version 0.3.6.17.2 oot: Tehuti tn40xx: add x3310fw_0_3_4_0_9445.hdr firmware Management: add 'scheduler' tunable for array devices auto-mount hugetlbfs to support kernel huge pages emhttpd: fix improper handling of embedded quote characters in a password emhttpd: correct footer notifications emhttpd: do not write /root/keyfile if encryption passphrase provided via webGUI emhttpd: properly handle encoded passwords emhttpd: solve deadlock issue with 'emcmd' called from a plugin extract OS 
upgrade directly to USB flash fix btrfs bug where converting from single to multiple pool did not balance metadata to raid1, and converting from multiple to single did not balance metadata back to single. fix shfs hard link initially reported as enabled but not actually enabled fstab: mount USB flash boot device with root-only access nginx.conf: configure all nginx worker threads to run as 'root'. nginx: disable php session expiration php: set very long session timeout samba: if netbios enabled, set 'server min protocol = NT1' shfs: fix bug not accounting for device(s) not mounted yet shfs: support FUSE3 API changes; hard links report same st_ino; hard link support configurable start/stop WireGuard upon server start/shutdown support WS-Discovery method support disabling NetBIOS, and set Samba 'min server procotol' and 'min client protocol' to SMB2 if disabled support forms-based authentication support mDNS local name resolution via avahi unRAIDServer.plg (update OS) now executes 'sync -f /boot' instead of full sync at end of update webgui: Add share access to user edit webgui: Add shares: slashes are not allowed in share name webgui: Add support for the self-hosted Gotify notification agent. webgui: Added 'F1' key to toggle help text webgui: Added AFP deprecated notice webgui: Added UPnP to access script (to support WireGuard plugin) webgui: Added VM XML files to diagnostics webgui: Added cache and disk type to shares page webgui: Added conditional UPnP setting on Management page webgui: Aligned management page layout webgui: Allow Safari to use websockets webgui: Allow outside click to close popups webgui: Change PluginHelpers download to be PHP Curl webgui: Change dashbord link for mb/mem webgui: Changed config folder of TELEGRAM webgui: Dashboard: WG tunnel handshake in days when longer than 24 hours webgui: Dashboard: add up/down arrows to VPN tunnel traffic webgui: Dashboard: adjust column width for themes azure/gray webgui: Dashboard: fix WG direction arrows webgui: Dashboard: fixed user write + read counts webgui: Dashboard: show titles without text-transform webgui: Diagnostics: Adjust for timezone from webGUI webgui: Diagnostics: Remove OSK info from VM xml webgui: Do not display error if docker log files manually deleted webgui: Docker and VM settings: validate path and name input webgui: Docker: fixed multi container updates display oddity webgui: Enable notifications by default webgui: Enhanced display of network settings webgui: Ensure spinner always ontop webgui: Expanded help for Use Cache setting webgui: Fix custom case png not surviving reboot webgui: Fixed diagnostics errors when array was never started webgui: Fixed docker container update state webgui: Fixed misalignment of absent disk on Main page webgui: Fixed popup window in foreground webgui: Fixed typo in help text webgui: Fixed typo in shares settings webgui: Fixed: footer always on foreground webgui: Fixed: undo cleanup of disk.png webgui: Font, Icon and image cleanup webgui: If a page is loaded via https, prevent it from loading resources via http (ie, block mixed content) webgui: Improve Use Cache option webgui: Integrate CAs Plugin Helper webgui: Made notify script compatible with 6.8 new security scheme webgui: Main page: consolidate spin up/down action and device status into one webgui: Modified notify script to allow overriding email recipients in notification settings webgui: Only create session when user successfully logs in; also enable session.use_strict_mode to prevent session fixation attacks webgui: 
Open banner system to 3rd party apps webgui: Plugin Helpers: Follow redirects on downloads webgui: Rename docker repositories tab to template repositories webgui: Revamp Banner Warning System webgui: Select case correction + replace MD1510 for AVS-10/4 webgui: Standardize on lang="en" webgui: Submit passphrases and passwords in base64 format webgui: Support wireguard plugin in download.php webgui: Switch download routine to be PHP Curl webgui: Syslog: allow up to 5 digits port numbers webgui: Telegram notification agent: enable group chat IDs, update helper description webgui: Unraid fonts and cases update webgui: Update ArrayDevices.page help text webgui: Upgrade noVNC to git commit 9f557f5 webgui: Use complete HTML documents in popups webgui: Warning alert for Format operations webgui: dockerMan - Deprecate TemplateURL webgui: dockerMan: Redownload Icon if URL changes webgui: other minor text corrections webgui: show warning on login page when browser cookies are disabled webgui: support changed tunables on Disk Settings page
  21. 18 points
Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
  22. 18 points
    I took a stab at writing a How-To based on the feedback in the release thread. If this looks helpful, maybe it could be added to the top post? Edit: this warrants it's own topic, thank you! -tom ----- Upgrading from 6.4.x to 6.5.0 Nothing to it really: Read the first post in the 6.4.1 and 6.5.0 release notes threads Consider disabling mover logging, it just adds noise to diagnostics. New 6.4.1 installs have it disabled by default. Go to Settings -> Scheduler -> Mover Settings Install/Update the Fix Common Problems plugin, then go to Tools -> Update Assistant and click "Run Tests". Whereas the normal FCP checks for potential problems with your *current* version of unRAID, the Update Assistant checks for incompatibilities with the version of unRAID you are *about* to install. It is highly recommended that you resolve any issues before proceeding. If you choose not to run the Update Assistant, you'll want to perform these steps manually: Ensure your server name does not include invalid characters, or you will have problems in 6.5.x. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and the name must be 15 characters or less. To fix this, go to Settings -> Identification and change the "Server name". (see this) Upgrade all your plugins Uninstall the Advanced Buttons plugin, it was newly discovered to be incompatible (see this). You may want to review the next section for other plugins that are known to have problems, if you skipped that when going to 6.4.0. Note that the S3 Sleep plugin works again, feel free to install it if you removed it when going to 6.4.0 (see this). Stop the array (this step is optional, but I like to do it before starting the upgrade) Go to Tools -> Update OS and update the OS. You may need to switch from Next to Stable to see the update. Reboot! Then check out the "Setting Up New Features" section below. Upgrading from 6.3.5 (or earlier?) to 6.5.0 Before you upgrade from 6.3.5 Read the first post in the 6.4.0, 6.4.1 and 6.5.0 release notes threads Consider disabling mover logging, it just adds noise to diagnostics. New 6.4.1 installs have it disabled by default. Go to Settings -> Scheduler -> Mover Settings Install/Update the Fix Common Problems plugin, then go to Tools -> Update Assistant and click "Run Tests". Whereas the normal FCP checks for potential problems with your *current* version of unRAID, the Update Assistant checks for incompatibilities with the version of unRAID you are *about* to install. It is highly recommended that you resolve any issues before proceeding. If you choose not to run the Update Assistant, you'll want to perform these steps manually: Ensure your server name does not include invalid characters, or you will have problems in 6.5.x. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and the name must be 15 characters or less. To fix this, go to Settings -> Identification and change the "Server name". (see this) If you have VMs, go to Settings -> VM Manager, switch to Advanced View, and make sure all of the paths are valid. Here are the default settings, but make sure the paths below actually exist on your system. Without these paths, your VMs will not load under 6.4.1. 
For more info see this default VM storage path -> /mnt/user/domains/ (this is DOMAINDIR in \\tower\flash\config\domain.cfg) default ISO storage path -> /mnt/user/isos/ (this is MEDIADIR in \\tower\flash\config\domain.cfg) Delete this file from your flash drive: \\tower\flash\config\plugins\dynamix.plg This is an old version of the dynamix webgui. Depending on how old it is, it can prevent the new webgui from loading. (see this, this) Upgrade all your plugins (other than the main unRAID OS plugin) You must uninstall Advanced Buttons (see this) and the Preclear plugin (see this, this, this, this, this, this). Consider uninstalling unmenu (see this), the Pipework docker (see this), and any other plugins you no longer use. Note that the S3 Sleep plugin works again, just make sure you have updated to the latest (see this) Consider installing the Fix Common Problems plugin and resolving any issues it highlights Additional cleanup you may want to perform: Consider deleting all files from the \\tower\flash\extra folder and install them using Nerd Tools instead Review your \\tower\flash\config\go script and use a good editor (like Notepad++, not Notepad) to remove as much as possible. For instance, remove any references to "cache_dirs" and use the Dynamix Cache Directories plugin instead. Consider moving other customizations to the User Scripts plugin Remove any port assignments added on the /usr/local/sbin/emhttp line FYI, a completely stock go script looks like this: #!/bin/bash # Start the Management Utility /usr/local/sbin/emhttp & Consider Installing any BIOS updates that are available for your motherboard If you made any substantial changes in this section, reboot and test to make sure any problems are not the result of these changes. Performing the upgrade from 6.3.5 Stop the array (If 6.3.5 or earlier) Go to Plugins and update the unRAID OS plugin (but don't reboot yet) (If on 6.4.0 or one of the 6.4 rc's) Go to Tools -> Update OS and update the OS (but don't reboot yet). You may need to switch from Next to Stable to see the update. By default, the webgui in unRAID 6.4.1 will use port 80 and any customization you made to the port in your go script will be ignored. If you need to change the port(s) before booting into 6.4.1 for the first time, use a good editor (like Notepad++, not Notepad) and edit \\tower\flash\config\ident.cfg Add the following lines to the end of the file, substituting the ports you want to use: USE_SSL="auto" PORT="80" PORTSSL="443" NOTE: If you need to change the defaults, be sure to pick high values over 1024. i.e. 81 and 43 are *not* good options. Try 8080 and 8443 Once you have booted in 6.4.0, you should no longer edit this file by hand. Better to use the webgui: Settings -> Identification -> Management Access If you are running Ryzen: Edit your \\tower\flash\config\go script (using a good editor like Notepad++ (not Notepad)) and add the "zenstates" command right before "emhttp", like this: /usr/local/sbin/zenstates --c6-disable /usr/local/sbin/emhttp & Also, go into your BIOS and disable "Global C-state control" Reboot It may be helpful to clear your browsers cache --- Setting Up New Features unRAID now supports SSL! unRAID will automatically provision and maintain a Lets Encrypt certificate for you, along with the necessary dynamic DNS entries. To enable this, go to Settings -> Identification -> Management Access and click Provision. If you get an error about rebinding protection, wait 10 minutes and try again. 
If you still get the error, click Help to read how to adjust your router. Using these certificates will change your url to <some long number>.unraid.net. No it can't be changed without disabling all of the automation and switching to your own certificates. You don't need to remember the number, when you connect via IP or servername it will automatically redirect. If you are concerned about a theoretical DNS outage, know that you can override your DNS by adding an entry to your PC's hosts file. Or in a pinch you can edit \\tower\boot\config\ident.cfg and set USE_SSL="no" If you prefer to not use the fully automated Lets Encrypt certificates, you can set your own domain name and supply your own certificates or use self-signed certificates. In this mode, you are responsible for managing DNS and ensuring the certificates do not expire. Click the Help icon on the SSL Certificate Settings page for more details. Want to check out the new themes? Navigate to Settings -> Display Settings -> Dynamix color theme. You may need to change your banner when you change the theme Note that you can now use the webgui to assign unique IP addresses to your dockers. If you manually customized your macvlans under 6.3.5 you'll need to set them up again using the webgui. You can now disable the insecure telnet protocol. Go to Settings -> Identification -> Management Access and set "Use TELNET" to "No" 6.4.1 adds docker support links to the docker page. Any dockers you install in 6.4.1 or later will automatically get this functionality, to update your existing dockers follow this one-time process Solutions to Common Problems Are you looking for the "edit XML" option for your VMs? First Edit the VM, then click the button in the upper right corner to switch from "Form View" to "XML View" If your system hangs at "loading /bzroot", you need to switch from "legacy" booting to UEFI. The ASRock C236 WSI motherboard in particular needs this, others may as well. (see this, this) To do this: (If needed) rename the "EFI-" folder on the flash drive to "EFI" Go into the bios and set the boot priority #1 to "UEFI: {flash drive}" Starting wth 6.4.1, unRAID now monitors SMART attribute 199 (UDMA CRC errors) for you. If you get an alert about this right after upgrading, you can just acknowledge it, as the error probably happened in the past. If you get an alert down the road, it is likely due to a loose SATA cable. Reseat both ends, or replace if needed. (see this) If your VMs will not start after upgrading, go to Settings -> VM Manager, switch to Advanced View, and make sure all of the paths are valid. Here are the default settings, but make sure the paths actually exist on your system: default VM storage path -> /mnt/user/domains/ (this is DOMAINDIR in \\tower\flash\config\domain.cfg) default ISO storage path -> /mnt/user/isos/ (this is MEDIADIR in \\tower\flash\config\domain.cfg) For more info see this. (the Update Assistant could have warned you of this in advance) If you can't access the webgui after upgrading to 6.5.0, your server name may include characters that are invalid for NETBIOS names. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and it must be 15 characters or less. (see this) To fix this, edit \\tower\flash\config\ident.cfg using a good editor (like Notepad++, not Notepad) and remove those characters from the "NAME" parameter, then reboot. 
(the Update Assistant could have warned you of this in advance) If you are unable to access the webgui after upgrading, you may have a really old version of the dynamix webgui plugin on your system. Delete \\tower\flash\config\plugins\dynamix.plg and reboot. (see this, this) (the Update Assistant could have warned you of this in advance) If you have problems booting after applying the upgrade, move your flash drive to a Windows or Mac machine and run checkdisk, fixing any problems it finds. If your cache disk has an "incompatible partition" after upgrading, it was probably created in an older version of the Unassigned Devices plugin. UD has been updated so this won't happen again. (the Update Assistant could have warned you of this in advance) This is a one-time fix. You'll need to downgrade to 6.3.5 and move the data off the cache drive, re-format the disk and move the data back. The procedure is outlined here If you have on-board Aspeed IPMI you may find that IPMI loses video or changes color during the boot process. To resolve this, go to Main -> Boot Device -> Flash -> Syslinux Config and add "nomodeset" to your "append" line (and reboot). It should look something like this: label unRAID OS kernel /bzimage append initrd=/bzroot nomodeset You'll probably want to repeat that on each of the other append lines in this file. If you don't see CPU load statistics on the dashboard, or if you can't use the new web-based terminal, switch to Chrome or Firefox instead of Safari. This is a shortcoming in Safari, not a bug in unRAID. Doesn't apply in 6.5.0 Are you having problems with your Lets Encrypt docker? First, make sure the webgui and docker aren't both trying to use the same port. Beyond that, Lets Encrypt recently made some changes that break things. These issues are not related to the unRAID upgrade, refer to the LE docker thread for help Speedtest complaining about Python? The timing is coincidental, but it is not related to the unRAID upgrade, see the Speedtest thread If you are unable to update your dockers, or if you see this error in the syslog: Feb 3 12:08:59 Unraid-Nas [6503]: The command failed. Error: sh: /usr/local/emhttp/usr/bin/docker: No such file or directory then you need to uninstall the Advanced Buttons plugin. It is not currently compatible with 6.4.1+ See this (the Update Assistant could have warned you of this in advance) If you get this error message when trying to install the preclear plugin: unRAID version (6.4.1 - GCC 7.3.0) not supported. please re-read the "before you upgrade" section of this post. The preclear plugin is not compatible with 6.4.1+ If you see this error message in your logs, please re-read the "before you upgrade" section of this post. rc.diskinfo is part of the preclear plugin, which is not compatible with 6.4.1 (the Update Assistant could have warned you of this in advance) Feb 6 06:21:30 Tower rc.diskinfo[17255]: PHP Warning: file_put_contents(): Only 0 of 2584 bytes written, possibly out of free disk space in /etc/rc.d/rc.diskinfo on line 499 Are you seeing a "wrong csrf_token" token in your logs? Close all your browser tabs (on all computers) that were pointed at unRAID prior to the last reboot. More info If you have severe errors (no lan, array won't start, webgui won't start) try installing on a new flash drive. If it works, that means the problem is with one of your customizations. 
You can either try to find and fix the problem (if you skipped the "before you upgrade" section, that would be a good place to start), or move forward with the clean flash drive. To continue with the clean drive, copy just the basics (/config/super.dat and your key file) from the old drive to the new one and then reconfigure as needed. Are you still having problems? Review the expanded "Before you upgrade" section above. If that doesn't help, grab your diagnostics (Tools -> Diagnostics) if you can, and reboot into Safe Mode. If your problems go away, then the problem is likely with a plugin. If the problems persist, you'll need additional help. Either way, grab your diagnostics again while in safe mode and attach both sets to a forum post where you clearly explain the issue.
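If you do end up rebuilding on a clean flash drive as described above, the "copy just the basics" step amounts to something like the sketch below. The mount points are placeholders for wherever the old and new flash drives are mounted on your PC, and the key file name varies per licence:

```bash
# Hypothetical example of seeding a fresh flash drive with just the essentials.
# /mnt/old_flash and /mnt/new_flash are placeholders for your actual mount points.
cp /mnt/old_flash/config/super.dat /mnt/new_flash/config/
cp /mnt/old_flash/config/*.key     /mnt/new_flash/config/
```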
  23. 17 points
I’ve been around a little while. I always follow the boards even though I have very little spare time to give to being active in the community anymore. I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. I was lucky enough to be a part of the @linuxserver.io team for a short while and I can personally attest to how much personal time and effort they put into development, stress testing and supporting their developments. While @limetech has developed a great base product, I think it’s right to acknowledge that much of the popularity and success of the product is down as much to community development and support (which is head and shoulders above by comparison) as it is to the work of the company. As a now outsider looking in, my personal observation is that the use of unRAID exploded due to the availability of stable, regularly updated media apps like Plex (the officially supported one was just left to rot) and then exploded again with the emergence of the @linuxserver.io nVidia build and the support that came with it. Given that the efforts of the community and groups like @linuxserver.io are even used in unRAID marketing, I feel this is a show of poor form. I feel frustrated at Tom’s “I didn’t know I needed permission ....” comment as it isn’t about that. It’s about respect and communication. A quick “call” to the @linuxserver.io team to let them know of the plan (yes, I know the official team don’t like sharing plans at risk of setting expectations they then won’t meet) and to (even privately) acknowledge the work that has contributed (and continues to contribute) to the success of unRAID, and let them be a part of it, would have cost nothing but would have been worth so much. I know the guys would have been supportive too. I hope the two teams can work it out, and that @limetech doesn’t forget what (and who) helped them get to where they are, and perhaps looks at other companies who have alienated their community through poor decisions and communication. Don’t make this the start of a slippery slope.
  24. 17 points
    I had the opportunity to test the “real word” bandwidth of some commonly used controllers in the community, so I’m posting my results in the hopes that it may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations, normal read/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected, result is observed parity check speed using a fast SSD only array with Unraid V6.1.2 (SASLP and SAS2LP tested with V6.1.4 due to performance gains compared with earlier releases) Values in green are the measured controller power consumption with all ports in use. 2 Port Controllers SIL 3132 PCIe gen1 x1 (250MB/s) 1 x 125MB/s 2 x 80MB/s Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards 1 x 375MB/s 2 x 206MB/s JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards 1 x 570MB/s 2 x 450MB/s 4/5 Port Controllers SIL 3114 PCI (133MB/s) 1 x 105MB/s 2 x 63.5MB/s 3 x 42.5MB/s 4 x 32MB/s Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s) 4 x 210MB/s Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization) 2 x 200MB/s 3 x 140MB/s 4 x 100MB/s Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization) 2 x 375MB/s 3 x 255MB/s 4 x 204MB/s IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset, results should be the same as for an LSI 9211-4i and other similar controllers 2 x 570MB/s 3 x 500MB/s 4 x 375MB/s JMicron JMB585 PCIe gen3 x2 (1970MB/s) - e.g., SYBA SI-PEX40139 and other similar cards 2 x 570MB/s 3 x 565MB/s 4 x 440MB/s 5 x 350MB/s 8 Port Controllers Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s) 4 x 220MB/s (167MB/s*) 5 x 177.5MB/s (135MB/s*) 6 x 147.5MB/s (115MB/s*) 7 x 127MB/s (97MB/s*) 8 x 112MB/s (84MB/s*) *on PCI-X 100Mhz slot (800MB/S) Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w 4 x 140MB/s 5 x 117MB/s 6 x 105MB/s 7 x 90MB/s 8 x 80MB/s Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w 4 x 340MB/s 6 x 345MB/s 8 x 320MB/s (205MB/s*, 200MB/s**) *on PCIe gen2 x4 (2000MB/s) **on PCIe gen1 x8 (2000MB/s) Dell H310 PCIe gen2 x8 (4000MB/s) - 6w – LSI 2008 chipset, results should be the same as for an LSI 9211-8i and other similar controllers 4 x 455MB/s 6 x 377.5MB/s 8 x 320MB/s (190MB/s*, 185MB/s**) *on PCIe gen2 x4 (2000MB/s) **on PCIe gen1 x8 (2000MB/s) LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset 8 x 525MB/s+ (*) LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset 8 x 525MB/s+ (*) * used SSDs maximum read speed SAS Expanders HP 6Gb (3Gb SATA) SAS Expander - 11w Single Link on Dell H310 (1200MB/s*) 8 x 137.5MB/s 12 x 92.5MB/s 16 x 70MB/s 20 x 55MB/s 24 x 47.5MB/s Dual Link on Dell H310 (2400MB/s*) 12 x 182.5MB/s 16 x 140MB/s 20 x 110MB/s 24 x 95MB/s * Half 6GB bandwidth because it only links @ 3Gb with SATA disks Intel® RAID SAS2 Expander RES2SV240 - 10w Single Link on Dell H310 (2400MB/s) 8 x 275MB/s 12 x 185MB/s 16 x 140MB/s (112MB/s*) 20 x 110MB/s (92MB/s*) Dual Link on Dell H310 (4000MB/s) 12 x 205MB/s 16 x 155MB/s (185MB/s**) Dual Link on LSI 9207-8i (4800MB/s) 16 x 275MB/s LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane) Single Link on LSI 9300-8i (tested with SATA3 devices, 
max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds) 8 x 475MB/s 12 x 340MB/s Dual Link on LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds, limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable) 10 x 510MB/s 12 x 460MB/s * Avoid using slower linking speed disks with expanders, as it will bring total speed down, in this example 4 of the SSDs were SATA2, instead of all SATA3. ** Two different boards have consistent different results, will need to test a third one to see what's normal, 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on Asrock B150M-Pro4S. Sata 2 vs Sata 3 I see many times on the forum users asking if changing to Sata 3 controllers or disks would improve their speed, Sata 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market, if buying a new board or controller you should buy sata 3 for the future, but except for SSD use there’s no gain in changing your Sata 2 setup to Sata 3. Single vs. Dual Channel RAM In arrays with many disks, and especially with low “horsepower” CPUs, memory bandwidth can also have a big effect on parity check speed, obviously this will only make a difference if you’re not hitting a controller bottleneck, two examples with 24 drive arrays: Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7Ghz Single Channel – 99.1MB/s Dual Channel - 132.9MB/s Supermicro X9SCL-F with Intel G1620 dual core @ 2.7Ghz Single Channel – 131.8MB/s Dual Channel – 184.0MB/s DMI There is another bus that can be a bottleneck for Intel based boards, much more so than Sata 2, the DMI that connects the south bridge or PCH to the CPU. Socket 775, 1156 and 1366 use DMI 1.0, socket 1155, 1150 and 2011 use DMI 2.0, socket 1151 uses DMI 3.0 DMI 1.0 (1000MB/s) 4 x 180MB/s 5 x 140MB/s 6 x 120MB/s 8 x 100MB/s 10 x 85MB/s DMI 2.0 (2000MB/s) 4 x 270MB/s (Sata2 limit) 6 x 240MB/s 8 x 195MB/s 9 x 170MB/s 10 x 145MB/s 12 x 115MB/s 14 x 110MB/s DMI 3.0 (3940MB/s) 6 x 330MB/s (Onboard SATA only*) 10 X 297.5MB/s 12 x 250MB/s 16 X 185MB/s *Despite being DMI 3.0, Skylake, Kaby Lake, Coffee Lake and Canon Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports. DMI 1.0 can be a bottleneck using only the onboard Sata ports, DMI 2.0 can limit users with all onboard ports used plus an additional controller onboard or on a PCIe slot that shares the DMI bus, in most home market boards only the graphics slot connects directly to CPU, all other slots go through the DMI (more top of the line boards, usually with SLI support, have at least 2 slots), server boards usually have 2 or 3 slots connected directly to the CPU, you should always use these slots first. You can see below the diagram for my X9SCL-F test server board, for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4. UMI (2000MB/s) - Used on most AMD APUs, equivalent to intel DMI 2.0 6 x 203MB/s 7 x 173MB/s 8 x 152MB/s Ryzen link - PCIe 3.0 x4 (3940MB/s) 6 x 467MB/s (Onboard SATA only) I think there are no big surprises and most results make sense and are in line with what I expected, exception maybe for the SASLP that should have the same bandwidth of the Adaptec 1430SA and is clearly slower, can limit a parity check with only 4 disks. 
I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.

How to check and improve your parity check speed

System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus limited: after the check finishes, look at the storage graph. On an unlimited system it should start at a higher speed and gradually slow down as it moves to the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in a worst-case scenario. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same). If you are not bus limited but still find your speed low, there are a couple of things worth trying: Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the possibility to mix different size disks, but this can leave you with an assortment of disk models and sizes; use this tool to find your slowest disks and, when it's time to upgrade, replace these first. Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more, on others it makes little or no difference. That's all I can think of, all suggestions welcome.
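If you would rather do a quick command-line check before reaching for the Diskspeed tool mentioned above, a rough way to spot an unusually slow drive is a buffered read test on each array disk. The device names below are examples only, and the drives will spin up for the test:

```bash
# Quick-and-dirty read test per disk -- adjust the device list to your own array.
for dev in /dev/sd{b..f}; do
    echo "== $dev =="
    hdparm -t "$dev"   # sequential buffered read near the start of the disk
done
```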
  25. 17 points
v6.8.2 uploaded. Delayed for a few reasons: I had problems (and still do) with the nvidia container runtime; I worked around it in the end, but it's not a long-term solution looking forward. I'm working like a dog at the moment as my current real-life job finishes in 2 days and I'm having to put a ton of extra hours in, my wife is a bit ungainly at the moment as she's very heavily pregnant so I'm having to do a bit more for our existing beast, and to add to that bass_rock has been away for work. So it's been kind of a perfect storm of not having much time to sit down with this, although I have been trying to get it working every chance I've had. Anyways, I've tested this version, I think everything is working, and I believe all the out-of-tree drivers are squared away. The last version (v6.8.1) might have been missing the Intel 1Gb driver, as I hadn't realised that it was different to the 10Gb driver.
  26. 17 points
    Turbo Write technically known as "reconstruct write" - a new method for updating parity JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail, what it is, how it compares with the traditional method, and the ramifications of using it. First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate. Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block, and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes. To summarize, for the "read/modify/write" method, you need to: * read in the parity block and read in the existing data block (can be done simultaneously) * compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short) * wait for platter rotation (very long!) * write out the parity block and write out the data block (can be done simultaneously) That's 2 reads, a calc, a long wait, and 2 writes. Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done! To summarize, for the "reconstruct write" method, you need to: * write out the data block while simultaneously reading in the data blocks of all other data drives * calculate the new parity block from all of the data blocks, including the new one (very short) * write out the parity block That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster! The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write. So what are the ramifications of this? 
* For some operations, like parity checks and parity builds and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway. * For large write operations, like large transfers to the array, it can make a big difference in speed! * For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed. * And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, and expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason. * So one of the questions to be faced is, how do you want your various write operations to be handled. Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer spinning them up than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly. * Another possible problem if you were in Turbo mode, and you are watching a movie streaming to your player, then a write kicks in to the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then? Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing. Tom talked about that Auto mode quite awhile ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use. So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite awhile ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day, and the other method at night. I think many users may find that scheduling it may satisfy their needs, Turbo when there's lots of writing, old style over night and when they are streaming movies. 
For a while, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers are complete. Edit: added what the setting is and where it's located (completely forgot this!)
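As a rough illustration of the cron-based scheduling idea mentioned above, the entries below toggle the write method twice a day. This assumes the md_write_method tunable can be set from the command line via Unraid's mdcmd helper, with 1 selecting reconstruct (Turbo) write and 0 selecting read/modify/write; verify the command and accepted values on your own release before relying on it.

```bash
# Hypothetical crontab entries for day/night toggling of the write method.
# Assumes 'mdcmd set md_write_method' accepts 1 = reconstruct write and
# 0 = read/modify/write -- check against your Unraid version first.
0 8  * * * /root/mdcmd set md_write_method 1   # Turbo write for daytime bulk transfers
0 23 * * * /root/mdcmd set md_write_method 0   # back to read/modify/write overnight
```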
  27. 17 points
We have this implemented for the 6.8 release.
  28. 17 points
    SSH into the server or use the console and type: mover stop
  29. 16 points
Community Applications (aka CA) This thread is rather long (and is mostly off-topic), and it is NOT necessary to read it in order to utilize Community Applications (CA). Just install the plugin, go to the Apps tab and enjoy the freedom. If you find an issue with CA, then don't bother searching for answers in this thread, as all issues (when they have surfaced) are generally fixed the same day that they are found... (But at least read the preceding post or two on the last page of the thread.) This is, without question, the best supported plugin / addon in the universe - on any platform. With a simple interface that is easy to use, you will be able to find and install any of the unRaid docker or plugin applications, and also optionally gain access to the entire library of applications available on dockerHub (~1.8 million).
INSTALLATION To install this plugin, paste the following URL into the Plugins / Install Plugin section: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg After installation, a new tab called "Apps" will appear on your unRaid webGUI. To see what the various icons do, simply press Help or the (?) on unRaid's Tab Bar. Note: All screenshots in this post are subject to change as Community Applications continues to evolve.
Easily search or browse applications. Get full details on the application. Easily reinstall previously installed applications. And much, much more (including the ability to search for and install any of the containers available on dockerHub (1,000,000+)).
Multi-Language Installations When running on a version of Unraid that supports Multi-Language (6.9.0-beta22+), CA is the recommended way to install any of the Language Packs available. See this post for more detail.
USING CA CA also has a dedicated Settings section (click Settings) which will let you fine tune certain aspects of its operation. NOTE: The following video was made prior to the current user interface, so it will look significantly different from the plugin itself, but it's still worth a watch.
Note that CA is always (and always will be) compatible with the latest Stable version of unRaid, and the Latest/Next version of unRaid. Intermediate Release Candidates may or may not be compatible (though they usually are - but if you have made the decision to run unRaid Next, then you should also ensure that all plugins and unRaid itself, not just CA, are always kept up to date). Additionally, every attempt is made to keep CA compatible with older versions of unRaid. As of this writing, CA is compatible with all versions of unRaid from 6.4 onward.
Require a proxy? See this post for how to get CA to operate through a proxy.
Cookie Note: CA utilizes cookies in its regular operation. Some features of CA may not be available if cookies are not enabled in your browser. No personally identifiable information is ever collected, no cookies related to any software or media stored on your server are ever collected, and none of the cookies are ever transmitted anywhere. Cookies related to the "Look & Feel" of Community Applications expire after a year. Any other cookies related to the operation of CA are automatically deleted after they are used.
Multi-language Note: When running on a version of unRaid that supports multi-language, CA will operate in the language of your choice.
However, translations of the descriptions of the applications themselves are outside the scope of the translations, and will always appear in whatever language the author has written them in (ie: English).
Contribute towards development (or simply buy me a beer).
Credits:
Development: Andrew Zawadzki
Additional Contributions: bonienl, eschultz
GUI Layout Design: Mex
Application Feed: Andrew Zawadzki, Kode, Limetech
Additional Testing: CHBMB, SpaceInvaderOne, Sparklyballs, wgstarks, DJoss, Zer0Nin3r, Mex, prostuff1, bonienl, ljm42, kizer, trurl
Moderation: dockerPolice, pluginCop
Additional Libraries: Awesomplete (Lea Verou), Chart.js (Various), XML2Array, Array2XML (Miles Johnson), chartjs-plugin-trendline (Marcus Alsterfjord), sprintf.js (Alexandru Mărășteanu)
Copyright © 2015-2020 Andrew Zawadzki
For the details regarding the various policies that Community Applications has regarding applications, see here
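One small aside on the INSTALLATION section above: I believe the same .plg URL can also be installed from a terminal using Unraid's plugin manager. Treat the exact command below as an assumption and fall back to the webGUI method if your release behaves differently.

# assumption: the plugin manager CLI accepts an install subcommand with a plugin URL
plugin install https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg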
  30. 16 points
I have been following this and the other thread with very mixed feelings, and I feel the community is unjustly hard towards @limetech. Sure, some things could have been handled better, yet I keep feeling that the bigger injustice is not actually committed by him. In order to understand things better and to see things from a different perspective, I personally like to make analogies. Sometimes it gives different insights into situations. And I came up with the following for this one: We have 3 parties here: the parent (@limetech), the uncle (@CHBMB and the like) and the kid (the community). Now the situation is that the kid is asking the parent for this shiny new toy, but for whatever reason the parent is not buying the kid the toy. Maybe it is too expensive, maybe he is waiting for the birthday, whatever. However, the uncle who hears the kid decides to get the kid this new toy, because he loves the kid and wants to please the kid. Fast forward, and the parent sees that the kid really loves the toy, but unfortunately the toy has some sharp edges and the parent is afraid the kid might hurt himself, hence the parent decides to order a better and safer version of the toy. However, when the parent tells the kid it ordered this new toy, the uncle hears the parent and flies into a rage, because the parent did not tell the uncle that he/she was going to buy the new toy, and the uncle thinks the parent is ungrateful because he/she did not even thank the uncle. In his rage, therefore, the uncle takes the toy away from the kid even before the new toy has arrived (it is, after all, still in beta). Not only that, but he takes away the other toys he got the kid as well, and says he is never going to give the kid any more toys. All this to punish the parent. Now with this analogy, ask yourself: Is the reaction of @CHBMB (the uncle) proportionate and justified? Does a parent (@limetech) need to inform the uncle of these kinds of things? Sure, it is nice, but is it really needed? Do you think it is right for the uncle to punish the kid? Should the parent even be grateful that the uncle presents the kid a toy with sharp edges (I know I wouldn't)? The only one the uncle should expect thanks from, IMO, is the kid. The community is and was grateful. Yet @CHBMB is the one who decided to punish the community and take away their toy because of his hurt feelings. Yet the only one who gets shit is @limetech. If I were him I would be more than a little pissed and disappointed, and I think it shows in his messages. Please read my analogy again and ask yourself who in the story did anything to hurt the kid: the parent or the uncle? And please also consider that we have no way of knowing whether @limetech was going to thank @CHBMB for the work in an official release note, which this wasn't. Now, I do think the parent should have said something to the uncle. And I am also a bit disappointed to learn that, even though Unraid builds heavily on the community, there is no special channel in place to facilitate communication with reliable community developers. Considering how well the development of both Unraid and the community add-ons go together, I kind of assumed something was already in place. However, it seems this is something that is being considered and worked on now. But in everything that happened, this simple miscommunication seems by far the lesser evil here.
And I do think it might be good for the community to ask itself again who really is to blame for taking away its shiny toy with sharp edges, and whether it is reasonable to have this reaction. But that's just my 2 cents.
  31. 15 points
    I come here to see what's new in development and find that there is a big uproar. Hate to say it, but I've been here a long time and community developers come and go and that's just the way it is. This unRAID product opens the door to personalizations, both private and shared. Community developers do leave because they feel that unRAID isn't going in the direction they want it to go or that the unRAID developers aren't listening to them even though there is no obligation to do so. Some leave in a bigger fuss than others. The unRAID developers do the best they can at trying to create a product that will do what the users want. They also do their best to support the product and the community development. The product is strong and the community support is strong and new people willing to put in time supporting it will continue to appear. Maybe some hint of what was coming might have eased tensions, but I just can't get behind users taking their ball and going home because unRAID development included something they used to personally support. That evolution has happened many times over the years, both incrementally and in large steps. That's the nature of this unRAID appliance type OS as it gets developed. There is no place for lingering bad feelings and continuing resentful posts. Hopefully, the people upset can realize that the unRAID developers are simply trying to create a better product, that they let you update for free, without any intent to purposely stomp on community developers.
  32. 15 points
There are several things you need to check in your Unraid setup to help prevent the dreaded unclean shutdown, and several timers that you need to adjust for your specific needs.
There is a timer in Settings->VM Manager->VM Shutdown time-out that needs to be set to a high enough value to allow your VMs time to completely shut down. Switch to the Advanced View to see the timer. Windows 10 VMs will sometimes have an update that requires a shutdown to complete. These can take quite a while, and the default setting of 60 seconds in the VM Manager is not long enough. If the VM Manager timer setting is exceeded on a shutdown, your VMs will be forced to shut down. This is just like pulling the plug on a PC. I recommend setting this value to 300 seconds (5 minutes) in order to ensure your Windows 10 VMs have time to completely shut down.
The other timer used for shutdowns is in Settings->Disk Settings->Shutdown time-out. This is the overall shutdown timer, and when this timer is exceeded, an unclean shutdown will occur. This timer has to be longer than the VM shutdown timer. I recommend setting it to 420 seconds (7 minutes) to give the system time to completely shut down all VMs, Dockers, and plugins.
If you have remote SMB or NFS mounts in Unassigned Devices, you need to account for the time it takes them to time out when unmounting if the remote server has gone offline. I recommend about 45 seconds for each remote mount. They are unmounted sequentially, so you need to account for 45 seconds for each one.
These timer settings do not extend the normal overall shutdown time; they just allow Unraid the time needed to do a graceful shutdown and prevent the unclean shutdown.
One of the most common reasons for an unclean shutdown is having a terminal session open. Unraid will not force it to shut down, but instead waits for it to be terminated while the shutdown timer is running. After the overall shutdown timer runs out, the server is forced to shut down. If you have the Tips and Tweaks plugin installed, you can specify that any bash or ssh sessions be terminated so Unraid can be gracefully shut down and won't hang waiting for them to terminate (which they won't without human intervention). A quick check for such sessions is shown below.
If your server seems hung and nothing responds, try a quick press of the power button. This will initiate a shutdown that will attempt a graceful shutdown of the server. If you have to hold the power button to do a hard power off, you will get an unclean shutdown.
If an unclean shutdown does occur because the overall "Shutdown time-out" was exceeded, Unraid will attempt to write diagnostics to the /log/ folder on the flash drive. When you ask for help with an unclean shutdown, post the /log/diagnostics.zip file. There is information in the log that shows why the unclean shutdown occurred.
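Before rebooting, it can be worth glancing at what sessions are still open. This is just a generic sketch using standard Linux tools (nothing Unraid-specific), so adapt it as you see fit:

# list logged-in console and SSH sessions that could hold up a clean shutdown
who
# show any lingering interactive shells or SSH daemons serving a session
# (the [brackets] keep grep from matching its own process)
ps aux | grep -E '[s]shd:|[b]ash'

If anything shows up, close those sessions (or let the Tips and Tweaks option kill them) before shutting down.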
  33. 15 points
    Is anybody using docker compose? Are there any plans to integrate it with unRAID?
  34. 15 points
    When my job, wife, daughter and sleep allow me to fit it in. For crying out loud, stop asking people. It's ready when it's ready. Now if you'll excuse me I have a game of hide and seek to play with my daughter. Sent from my Mi A1 using Tapatalk
  35. 15 points
  36. 14 points
New in this release:
GPU Driver Integration Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD). These drivers are blacklisted by default via 'conf' files in /etc/modprobe.d:
/etc/modprobe.d/ast.conf
/etc/modprobe.d/amdgpu.conf
/etc/modprobe.d/i915.conf
/etc/modprobe.d/radeon.conf
Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel. However, it is possible to override the settings in these files by creating the directory 'config/modprobe.d' on your USB flash boot device and then creating the same-named file in that directory. For example, to un-blacklist amdgpu, type these commands in a Terminal session:
mkdir /boot/config/modprobe.d
touch /boot/config/modprobe.d/amdgpu.conf
When Unraid OS boots, before the Linux kernel executes device discovery, we copy any files from /boot/config/modprobe.d to /etc/modprobe.d. Since amdgpu.conf on the flash is an empty file, it will effectively cancel the driver from being blacklisted. This technique can be used to set boot-time options for any driver as well (see the sketch just below).
Better Support for Third Party Drivers Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware. We now set up an overlayfs on each of these mount points, making it possible to install 3rd party modules at boot time, provided those modules are built against the same kernel version. This technique may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware and bzroot files. To go along with the other GPU drivers included in this release, we have created a separate installable Nvidia driver package. Since each new kernel version requires drivers to be rebuilt, we have set up a feed that enumerates each driver available for each kernel. The easiest way to install the Nvidia driver, if you require it, is to make use of a plugin provided by Community member @ich777. This plugin uses the feed to install the correct driver for the currently running kernel. A big thank you! to @ich777 for providing assistance and coding up the plugin.
Linux Kernel This release includes Linux kernel 5.8.18. We realize the 5.8 kernel has reached EOL and we are currently busy upgrading to 5.9.
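As an illustration of that last point, here is a minimal sketch of carrying a driver option on the flash drive using the same override mechanism. The i915 option shown is only an example of the general idea (my assumption, not something this release requires); use whatever option your hardware actually needs, or leave the file empty to simply cancel the blacklist:

# create the override directory on the flash drive (safe if it already exists)
mkdir -p /boot/config/modprobe.d
# an empty file just cancels the blacklist, exactly like the amdgpu example above
touch /boot/config/modprobe.d/i915.conf
# or carry a module option so it is applied at every boot (example option only - check your hardware docs)
echo "options i915 enable_guc=2" > /boot/config/modprobe.d/i915.conf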
Version 6.9.0-beta35 2020-11-12 (vs -beta30)
Base distro:
aaa_elflibs: version 15.0 build 25
brotli: version 1.0.9 build 2
btrfs-progs: version 5.9
ca-certificates: version 20201016
curl: version 7.73.0
dmidecode: version 3.3
ethtool: version 5.9
freetype: version 2.10.4
fuse3: version 3.10.0
git: version 2.29.1
glib2: version 2.66.2
glibc-solibs: version 2.30 build 2
glibc-zoneinfo: version 2020d
glibc: version 2.30 build 2
iproute2: version 5.9.0
jasper: version 2.0.22
less: version 563
libcap-ng: version 0.8 build 2
libevdev: version 1.10.0
libgcrypt: version 1.8.7
libnftnl: version 1.1.8
librsvg: version 2.50.1
libwebp: version 1.1.0 build 3
libxml2: version 2.9.10 build 3
lmdb: version 0.9.27
nano: version 5.3
ncurses: version 6.2_20201024
nginx: version 1.19.4
ntp: version 4.2.8p15 build 3
openssh: version 8.4p1 build 2
pam: version 1.4.0 build 2
rpcbind: version 1.2.5 build 2
samba: version 4.12.9 (CVE-2020-14318)
talloc: version 2.3.1 build 4
tcp_wrappers: version 7.6 build 3
tdb: version 1.4.3 build 4
tevent: version 0.10.2 build 4
usbutils: version 013
util-linux: version 2.36 build 2
vsftpd: version 3.0.3 build 7
xfsprogs: version 5.9.0
xkeyboard-config: version 2.31
xterm: version 361
Linux kernel: version 5.8.18
added GPU drivers:
CONFIG_DRM_RADEON: ATI Radeon
CONFIG_DRM_RADEON_USERPTR: Always enable userptr support
CONFIG_DRM_AMDGPU: AMD GPU
CONFIG_DRM_AMDGPU_SI: Enable amdgpu support for SI parts
CONFIG_DRM_AMDGPU_CIK: Enable amdgpu support for CIK parts
CONFIG_DRM_AMDGPU_USERPTR: Always enable userptr write support
CONFIG_HSA_AMD: HSA kernel driver for AMD GPU devices
kernel-firmware: version 20201005_58d41d0
md/unraid: version 2.9.16: correct recording of disk info with array Stopped; remove 'superblock dirty' handling
oot: Realtek r8152: version 2.14.0
Management:
emhttpd: fix 'auto' setting where pools enabled for user shares should not be exported
emhttpd: permit Erase of 'DISK_DSBL_NEW' replacement devices
emhttpd: track clean/unclean shutdown using file 'config/forcesync'
emhttpd: avoid unnecessarily removing mover.cron file
modprobe: blacklist GPU drivers by default, config/modprobe.d/* can override at boot
samba: disable aio by default
startup: set up an overlayfs for /lib/modules and /lib/firmware
webgui: pools not enabled for user shares should not be selectable for cache
webgui: Add pools information to diagnostics
webgui: vnc: add browser cache busting
webgui: Multilanguage: Fix unable to delete / edit users
webgui: Prevent "Add" reverting to English when adding a new user with an invalid username
webgui: Fix Azure / Gray Switch Language being cut-off
webgui: Fix unable to use top right icons if notifications present
webgui: Changed: Consistency between dashboard and docker on accessing logs
webgui: correct login form wrong default case icon displayed
webgui: set 'mid-tower' default case icon
webgui: fix: jGrowl covering buttons
webgui: New Perms: Support multi-cache pools
webgui: Remove WG from Dashboard if no tunnels defined
webgui: dockerMan: Allow readmore in advanced view
webgui: dockerMan: Only allow name compatible with docker
  37. 14 points
Hi guys, this is a simple plugin that allows users to clear their disks before adding them to the array. The main characteristics of this plugin are:
Modularity: can be used standalone or in conjunction with the Joe L. or bjp999 scripts;
Ease of use: with a few clicks you can start a clear session on your disk;
Integration: you can always access the plugin under the Tools > Preclear Disk menu. If you have Unassigned Devices installed, you can start/stop/view preclear sessions directly from Main > Unassigned Devices;
All dependencies included: you don't need SCREEN to run a preclear session in the background; all jobs are executed in the background by default, so you can close your browser while the preclear runs.
You can install it directly or via Community Apps.
Q & A:
Q) Why are the Joe L. or bjp999 scripts not included?
A) I'm not authorized by Joe L. to redistribute his script, so you need to download a copy from the topic above and put it under /boot/config/plugins/preclear.disk/, renaming it to preclear_disk.sh if necessary. The bjp999 modifications are unofficial, so I decided not to include them by default.
A) The bjp999 script is now included by default.
Q) By default, I see a "gfjardim" script always available. Why?
A) Since I'm not authorized to redistribute the Joe L. script, and given the recent slow support by the author, I decided a major code rewrite of the script was needed. The new script is being actively supported, is compatible with unRAID notifications, is faster than the bjp999 script, and has a cleaner output so users can easily visualize what's going on with their disks.
Q) I want to use one of the older scripts (Joe L. or bjp999) in conjunction with notifications. Is that possible?
A) Yes. I've made some adjustments to both scripts so they become compatible with unRAID notifications; the Joe L. version can be found here and the bjp999 version can be found here.
A) The bjp999 script is now included by default; it includes support for Unraid notifications.
Q) Are there any howtos available?
A) gridrunner made an awesome video explaining why preclearing a hard disk is a good idea, and how you can accomplish that:
Q) The plugin asked me to send some statistics information. How does the statistics report system work? Is it safe? Is it anonymous?
A) To better track the usage of the plugin, a statistics report system was put in place. The main goals I intend to achieve are: know the number of disks that get precleared; fix any silent bugs that get reported in the logs; know the average size of disks, their model, and the average speed and elapsed time we should expect from that model; success rate; rate of disks with SMART problems. This system is totally optional and users will be prompted before each report is sent. It is also safe and totally anonymous, since all data is sent to Google Forms and no identifying data is exported, like disk serial numbers. Detailed info can be found here. The statistics are public and can be found here.
Q) How can I download a copy of the plugin log?
A) Please go to Tools, then Preclear Disk, and click on the Download icon:
Q) What are the differences between Erase and Clear?
A) The Clear option uses zeroes to fill the drive; at the end, the drive can be added to the array immediately. The Erase All the Disk option uses random data to wipe the drive; the resulting drive can't be quickly added to the array. If you want to add it after an erase, you must select Erase and Clear the Disk.
Troubleshooting:
Q) After Zeroing the disk, the Post-Read operation fails saying my drive isn't zeroed.
A) When zeroing the disk, the script uses a zero-filled data stream produced by the pseudo-device /dev/zero. If a Post-Read fails just after a Zeroing operation, chances are that you have bad RAM, or less frequently a bad PSU, bad cables or a bad SAS/SATA card. Please run some rounds of MEMTEST on your machine to test your RAM modules.
Q) A Pre-Read operation failed and I see Pending Sectors on the SMART report.
A) Pending Sectors will lead to read errors, and the Pre-Read operation will fail. To force the hard drive firmware to remap those sectors, you have to run a Preclear session with the Skip Pre-Read option checked.
Q) I've lost communication with the webgui; can I manage preclear sessions from the terminal?
A) Yes, you can. If you lose communication with the webgui or want to use the command line interface to manage your preclear sessions, just type preclear in your terminal to start/stop or observe a preclear session.
  38. 14 points
I do not use any of these unofficial builds, nor do I know what they are about and what features they provide that are not included in stock Unraid. That being said, I still feel that the devs who release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files." which are corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's true that bug reports based on unofficial builds complicate matters. It may also be frustrating that people are reluctant to go the extra mile, go back to stock Unraid and try to reproduce the error there, especially since they might be convinced (correctly or not) that it has nothing to do with the unofficial build. Granted, from an engineer's point of view that might be seen as a nuisance. But from a customer-driven business point of view it's a self-destructive perspective. Obviously these builds fill a need that Unraid could not, or else they would not exist and there wouldn't be enough people using them to be a "bug hunting" problem in the first place. They expand Unraid's capabilities, bring new customers to Unraid, demonstrate a lively and active community, and are basically everything I love about Unraid. I think @limetech did not mean it that way, but I can fully see how people who poured a lot of energy and heart into the Unraid ecosystem might perceive it that way. I think if you had said instead: "Finally, we incorporated these new features x, y, and z formerly only available in the builds by A, B and C. Thanks again for the great work A, B and C have been doing for a long while now and for showing us in what way we can enhance Unraid for our customers. It took a long time, but now it's here. It should also make finding bugs easier, as many people can now use the official builds." then everybody would have been happy. I think it's probably a misunderstanding. I can't really imagine you wanting to discourage the community from making Unraid reach out to more users.
  39. 14 points
Tons of posts name Windows 10 and SMB as the root cause of the inability to connect to unRaid, but they were all fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.
Directions:
Press the Windows key + R shortcut to open the Run command window.
Type in gpedit.msc and press OK.
Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click Enable insecure guest logons and set it to Enabled.
Now attempt to access \\tower
Related Errors:
Windows cannot access \\tower
Windows cannot access \\192.168.1.102
You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
  40. 14 points
You've obviously got some ideas, why not do it? The problem I see time and time again is that people keep telling us what we should be doing and how quickly we should be doing it. Now, don't be offended, because this is a general observation rather than personal. It's ten to one in the morning, I've just got back from work, I have a toddler that is going to get up in about five hours, my wife is heavily pregnant; Unraid Nvidia and beta testing just isn't up there in my list of priorities at this point. I've already looked at it and I need to look at compiling the newly added WireGuard out-of-tree driver. I will get around to it, but when I can. And if that means some Unraid users have to stick on v6.8.0 for a week or two, then so be it; or, alternatively, forfeit GPU transcoding for a week or two, then so be it. I tried every way I could, when I was developing this, to avoid completely repacking Unraid, I really did; nobody wanted to do that less than me. But if we didn't do it this way, then we just saw loads of seg faults. I get a bit annoyed by criticism of turnaround time, because, as this forum approaches 100,000 users, how many actually give anything back? And of all the people who tell us how we should be quicker, how many step up and do it themselves? TL;DR It'll be ready when it's ready, not a moment sooner, and if my wife goes into labour, well, it's probably going to get delayed. My life priority order: 1. Wife/kids 2. Family 3. Work (pays the mortgage and puts food on the table) @Marshalleq The one big criticism I have is comparing this to the ZFS plugin; no disrespect, but that's like comparing apples to oranges. Until you understand - and my last lengthy post on this thread might give you some insight - please refrain from complaining. ZFS installs a package at boot; we replace every single file that makes up Unraid other than bzroot-gui. I've said it before, I'll say it again: WE ARE VOLUNTEERS. Want enterprise-level turnaround times? Pay my wages.
  41. 14 points
This was an interesting one: builds completed and looked fine, but wouldn't boot, which was where the fun began. Initially I thought it was just because we were still using GCC v8 and LT had moved to GCC v9; alas, that wasn't the case. After examining all the bits and watching the builds, I tried to boot with all the Nvidia files but using a stock bzroot, which worked. So then I tried to unpack and repack a stock bzroot, which also reproduced the error. And interestingly, the repackaged stock bzroot was about 15MB bigger. I asked LT if anything had changed, as we were still using the same commands as when I started this back in ~June 2018. Tom denied anything had changed on their end recently, and just told us they were using xz --check=crc32 --x86 --lzma2=preset=9 to pack bzroot with. So I changed the packaging to use that for compression; it still wouldn't work. At one point I had a repack that worked, but when I tried a build again I couldn't reproduce it, which induced a lot of head scratching, and I assumed my version control of the changes I was making must have been messed up; but damned if I could reproduce a working build, and both @bass_rock and I were trying to get something working with no luck. Ended up going down a rabbit hole of analysing bzroot with binwalk, and became fairly confident that the microcode prepended to the bzroot file was good, and that it must be the actual packaging of the root filesystem that was the error. We focused in on the two relevant lines, the problem being that LT had given us the parameter to pack with, but that command receives its input from cpio, so it can't be fully presumed to be good, and we still couldn't ascertain that the actual unpack was valid, although it looked to give us a complete root filesystem. Yesterday @bass_rock and I were both running "repack" tests on a stock bzroot to try and get that working, confident that if we could do that the issue would be solved. Him on one side of the pond and me on the other..... changing a parameter at a time and discussing it over Discord. Once again I managed to generate a working bzroot file, but tested the same script again and it failed. Got to admit that confused the hell out of me..... Had to go to the shops to pick up some stuff, which gave me a good hour in the car to think about things, and I had a thought: I did a lot of initial repacking on my laptop rather than via an ssh connection to an Unraid VM, and I wondered if that may have been the reason I couldn't reproduce the working repack. Reason being, tab completion on my Ubuntu-based laptop means I have to prepend any script with ./ whereas on Unraid I can just enter the first two letters of the script name and tab complete will work, and obviously I will always take the easiest option. I asked myself if the working build I'd got earlier was failing because it was dependent on being run using ./ and perhaps I'd run it like that on the occasions it had worked. Chatted to bass_rock about it and he kicked off a repackaging of a stock bzroot build with --no-absolute-filenames removed from the cpio bit, and it worked. We can only assume something must have changed on LT's side at some point. To put it into context, this cpio snippet has been in use since at least 2014/5, or whenever I started with the DVB builds. The scripts to create a Nvidia build are over 800 lines long (not including the scripts we pull in from Slackbuilds) and we had to change 2 of them........
There are 89 core dependencies, which occasionally change when an extra one is added, or a version update of one of these breaks things. I got a working Nvidia build last night and had been testing it for 24 hours, then woke up to find that, FML, Slackbuilds have updated the driver since. Have run a build again, and it boots in my VM. Need to test transcoding on bare metal, but I can't do that as my daughter is watching a movie, so it'll have to wait until either she goes for a nap or the movie finishes. Just thought I'd give some background for context. Please remember all the plugin and docker container authors on here do this in our free time; people like us, Squid, dlandon, bonienl et al put a huge amount of work in, and we do the best we can. Comments like this are not helpful, nor appreciated, so please read the above to find out, and get some insight into, why you had to endure the "exhaustion" of constant reminders to upgrade to RC7. Comments like this are welcome and make me happy..... Edit: Tested and working, uploading soon.
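For anyone curious what the packaging step being debugged above actually looks like, here is a heavily simplified sketch of repacking an already-unpacked root filesystem into a bzroot-style archive. This is only my illustration of the xz/cpio combination mentioned in the post, not the actual 800-line build scripts; the paths are placeholders, and the microcode that gets prepended to the real bzroot is not handled here at all.

# sketch only: repack an unpacked root tree into an xz-compressed newc cpio archive
# using the compression parameters LT shared (paths are hypothetical)
cd /path/to/unpacked/rootfs
find . | cpio -o -H newc | xz --check=crc32 --x86 --lzma2=preset=9 > /tmp/bzroot-rootfs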
  42. 14 points
I've been doing this for a long time now via the command line with my important VMs. First, my VM vdisks are in the domains share, where I have created the individual VM directory as a btrfs subvolume instead of a normal directory, ie:
btrfs subv create /mnt/cache/domains/my-vm
results in:
/mnt/cache/domains/my-vm <--- a btrfs subvolume
Then let vm-manager create vdisks in here normally and create your VM. Next, when I want to take a snapshot, I hibernate the VM (win10) or shut it down. Then from the host:
btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup
Of course you can name the snapshot anything, perhaps including a timestamp. In my case, after taking this initial backup snapshot, a subsequent backup will do something like this:
btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup-new
Then I send the block differences to a backup directory on /mnt/disk1:
btrfs send -p /mnt/cache/domains/my-vm/backup /mnt/cache/domains/my-vm/backup-new | pv | btrfs receive /mnt/disk1/Backup/domains/my-vm
and then delete backup and rename backup-new to backup. What we want to do is add an option in VM manager that says, "Create snapshot upon shut-down or hibernation" and then add a nice GUI to handle snapshots and backups. I have found btrfs send/recv somewhat fragile, which is one reason we haven't tackled this yet. Maybe there's some interest in a blog post describing the process along with the script I use?
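Until such a write-up exists, here is a rough sketch of how those steps could be strung together in one script. This is my own paraphrase of the routine described above, not Tom's actual script: the VM name, the use of virsh shutdown, and the destination path are all assumptions, and housekeeping of the received snapshots on the destination side is deliberately left out.

#!/bin/bash
# sketch of the incremental snapshot/backup flow described in the post above
VM=my-vm                                   # hypothetical VM / subvolume name
SRC=/mnt/cache/domains/$VM
DST=/mnt/disk1/Backup/domains/$VM

virsh shutdown "$VM"                       # or hibernate the guest first, as in the post
# (in practice you would wait for the guest to actually power off before snapshotting)
btrfs subvolume snapshot -r "$SRC" "$SRC/backup-new"
if [ -d "$SRC/backup" ]; then
    # incremental: send only the block differences since the previous snapshot
    btrfs send -p "$SRC/backup" "$SRC/backup-new" | btrfs receive "$DST"
    btrfs subvolume delete "$SRC/backup"
else
    # first run: full send
    btrfs send "$SRC/backup-new" | btrfs receive "$DST"
fi
mv "$SRC/backup-new" "$SRC/backup"         # keep the newest snapshot as the next parent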
  43. 14 points
Overview of what Macinabox does. This is a container that is designed to make installing a macOS KVM virtual machine very easy. The VM doesn't run in a docker container but runs as a full-fat Unraid KVM VM, selectable in the VM tab of the webUI. However, your server's hardware must be 'fairly' modern to run a macOS VM. You are going to need a CPU that supports SSE 4.2 & AVX2 for macOS Mojave and above to work. Both Intel and AMD processors are fine to use. To use it, just select the OS type, vdisk type and size (I suggest you use the raw disk type). Then let Macinabox make a vdisk for the install, download the recovery media and the Clover boot-loader, and create a VM XML file that is preconfigured to work. (The XML files created will have unique UUIDs and network MAC addresses.) Sit back and let the container do its stuff - note - to see the progress of the container, look at the log whilst it runs. You will know when it has finished as you will see a message saying to stop then start the array. This container doesn't have a webUI (clicking on the webUI button of this container will just take you to a video of how to use this container). - - - - - - - - - - - So after the container has done its stuff, stop the array then start it again and the VM will become visible in the Unraid VM manager. (You will not see it if you don't do this.) Click start to start the VM and you will boot into the Clover boot-loader. Then press enter to continue to load the recovery media. Go to Disk Utility and format the vdisk, then close Disk Utility. Select re-install macOS, then sit back and wait until done. Please be patient when installing, as the install speed will depend on your internet connection and how busy the Apple servers are. After installing the VM, don't run the container again, or else it will overwrite the vdisk with the install on it. (I will change this so it can't happen soon.) It is probably best to remove the container after installing, for now, just to be safe. Edit - I have now added checks to stop the container re-downloading the install media if it is run again. It will also check for an existing vdisk and, if found, not create another and therefore not overwrite it. The same goes for the XML file. However, if the container is run again it will download another set of Clover and OVMF files. I have done this so people can easily update Clover and OVMF files if needed.
  44. 13 points
Hey everyone, just thought I'd put this up here after reading a syslog by another forum member and realizing a repeating pattern I've seen here, where folks decide to let Plex create temporary files for transcoding on an array or cache device instead of in RAM.
Why should I move transcoding into RAM? What do I gain? In short, transcoding is both CPU and IO intensive. Many write operations occur to the storage medium used for transcoding, and when using an SSD specifically, this can cause unnecessary wear and tear that would lead to SSD burnouts happening more quickly than is necessary. By moving transcoding to RAM, you alleviate the burden on your non-volatile storage devices. RAM isn't subject to "burn out" from usage like an SSD would be, and transcoding doesn't need nearly as much space in memory as some would think.
How much RAM do I need for this? A single stream of video content transcoded to 12mbps on my test system took up 430MB on the root RAM filesystem. The quality of the source content shouldn't matter, only the bitrate to which you are transcoding. In addition, there are other transcoding settings you can tweak that would impact this number, including how many seconds of transcoding should occur in advance of being played. Bottom line: if you have 4GB or less of total RAM on your system, you may have to tweak settings based on how many different streams you intend to transcode simultaneously. If you have 8GB or more, you are probably in the safe zone, but obviously the more RAM you use in general, the less space will be available for transcoding.
How do I do this? There are two tweaks to be made in order to move your transcoding into RAM. One is to the Docker container you are running and the other is a setting within the Plex web client itself.
Step 1: Changing your Plex Container Properties. From within the webGui, click on "Docker" and click on the name of the PlexMediaServer container. From here, add a new volume mapping: /transcode to /tmp. Click "Apply" and the container will be started with the new mapping.
Step 2: Changing the Plex Media Server to use the new transcode directory. Connect to the Plex web interface from a browser (e.g. http://tower:32400/web). From there, click the wrench in the top right corner of the interface to get to settings. Now click the "Server" tab at the top of this page. On the left, you should see a setting called "Transcoder." Clicking on that and then clicking the "Show Advanced" button will reveal the magical setting that lets you redirect the transcoding directory. Type "/transcode" in there, click apply and you're all set. You can tweak some of the other settings if desired to see if that improves your media streaming experience. Thanks for reading and enjoy!
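For reference, the Step 1 mapping boils down to the same thing as a -v flag on a plain docker run. The sketch below is only an illustration (the image name and config path are assumptions, and your real container will have more mappings for media etc.); on Unraid the webGUI template is still the right place to make the change.

# host /tmp (which lives in RAM on Unraid) exposed inside the container as /transcode
# image name and config path below are examples only
docker run -d --name=plex \
  -v /tmp:/transcode \
  -v /mnt/user/appdata/plex:/config \
  plexinc/pms-docker

Pointing Plex's Transcoder directory at /transcode (Step 2) then keeps all the temporary segment files in memory instead of on your SSD or array.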
  45. 13 points
Hi, Guys. This is the first part of a two-part video about setting up a Windows 10 KVM VM in unRAID (second part in a day or 2 if work lets me!). The first part deals with setting up the VM correctly to be able to use it as a 'daily driver'. The second part then covers passing through hardware to turn it into a gaming VM. The first part consists of: downloading a Windows 10 ISO; where to buy a license for Windows 10 Pro for $20; how to assign resources and correctly pin your CPUs; how to install the virtio drivers, including the qxl graphics driver; how to remove or block the Windows 10 data mining - phone home - etc. with anti beacon; how to install multiple useful programmes with ninite; using Splashtop desktop for good quality remote viewing; how to install a virtual sound card to have sound in Splashtop/RDP etc.; using mapped drives and symlinks to get the most out of the array; Windows tweaks for VM compatibility; general tips. Hope you find it useful. The best way to install and setup a windows 10 vm as a daily driver or a Gaming VM. Below is the second part of the two-part video about setting up a Windows 10 KVM VM in unRAID. The second part deals with passing through hardware, and potential problems and solutions, showing you how to turn it into a gaming VM. Hope you find it useful.
  46. 13 points
Welcome to 6.9 beta release development! This initial -beta1 release is exactly the same code set as 6.8.3, except that the Linux kernel has been updated to 5.5.8 (the latest stable as of this date). We have only done basic testing on this release; normally we would not release 'beta' code, but some users require the hardware support offered by the latest kernel along with the security updates added in 6.8. Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only! Unfortunately, none of our out-of-tree drivers will build with this kernel. Some were reverted back to the in-tree version, some were omitted. We anticipate that by the time 6.9 enters the 'rc' phase, we'll be on the 5.6 kernel and hopefully some of these out-of-tree drivers will be restored. We will be phasing in some new features, improvements, and changes to address certain bugs over the coming weeks - these will all be -beta releases. Don't be surprised if you see a jump in the beta number; some releases will be private.
Version 6.9.0-beta1 2020-03-06
Linux kernel: version 5.5.8
igb: in-tree
ixgbe: in-tree
r8125: in-tree
r750: (removed)
rr3740a: (removed)
tn40xx: (removed)
  47. 13 points
  48. 13 points
The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer, each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid, it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available. In that case, instead of making the calling process wait, it terminated the I/O. This has worked this way for years. When this problem first happened, there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and my next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache - but I think those reports were mistaken for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease. Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/Os incorrectly (there were kernel bugs related to this with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what do you know, not failing read-aheads makes the issue go away. What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and their logic to handle a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.
  49. 13 points
To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to @ljm42's excellent 6.4 Update Notes, which are helpful especially if you are upgrading from a pre-6.4 release. Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. This is a bug fix and security update release. Due to another set of processor vulnerabilities called ZombieLoad, and a set of TCP denial-of-service vulnerabilities called SACK Panic, all users are encouraged to update. We are also still trying to track down the source of SQLite database corruption. It will be very helpful for those affected by this issue to also upgrade to this release.
Version 6.7.1 2019-06-22
Base distro:
btrfs-progs: version 5.1.1
curl: version 7.65.1 (CVE-2019-5435, CVE-2019-5436)
dhcpcd: version 7.2.2
docker: version 18.09.6
kernel-firmware: version 20190607_1884732
mozilla-firefox: version 66.0.5
openssl: version 1.1.1c
openssl-solibs: version 1.1.1c
php: version 7.2.19 (removed sqlite support)
samba: version 4.9.8 (CVE-2018-16860)
xfsprogs: version 5.0.0
Linux kernel: version 4.19.55 (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091, CVE-2019-11833, CVE-2019-11477, CVE-2019-11478, CVE-2019-11479)
intel-microcode: version 20190618
Management:
shfs: support FUSE use_ino option
Dashboard: added draggable fields in table
Dashboard: added custom case image selection
Dashboard: enhanced sorting
Docker + VM: enhanced sorting
Docker: disable button "Update All" instead of hiding it when no updates are available
Fix OS update banner overhanging in Azure / Gray themes
Do not allow plugin updates to same version
misc style corrections
  50. 13 points
Ok, the move is pretty much complete. Quite a pain moving both residence and corporate headquarters at the same time! FYI, here's a brief history:
circa 2005/2006: unRAID born, Sunnyvale, CA
2008-2011: Fort Collins, CO
2012-early 2018: San Diego, CA (incorporated 2015)
present: Anaheim, CA (no, we're not in the tree house in Disneyland)
Jon, meanwhile, moved within the same city near Chicago. Next up: release 6.5.3-rc2, which brings us up to date with the Linux 4.14 LTS kernel, along with a handful of bug fixes. As soon as that release is promoted to stable, we'll get the next release, unRAID 6.6, out there. Thanks to everyone for your patience during this time.