Popular Content

Showing content with the highest reputation since 09/28/20 in all areas

  1. 12 points
Back in the saddle... Sorry for the long delay in publishing this release. Aside from including some delicate coding, this release was delayed because several team members, chiefly myself, had to deal with various non-work-related challenges which greatly slowed the pace of development. That said, there is quite a bit in this release. LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come. Thanks to everyone for their help and patience during this time. Cheers, -Tom

IMPORTANT: This is Beta software. We recommend running on test servers only!

KNOWN ISSUE: With this release we have moved to the latest Linux 5.8 stable kernel. However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e. It typically looks like this on the System Devices page:

Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

The problem is that devices are no longer recognized. There are already bug reports pertaining to this issue:
https://bugzilla.kernel.org/show_bug.cgi?id=209177
https://bugzilla.redhat.com/show_bug.cgi?id=1878332

We have reached out to the maintainer to see if a fix can be expedited; however, we feel that we can neither revert back to the 5.7 kernel nor hold the release due to this issue. We are monitoring and will publish a release with a fix asap.

ANOTHER known issue: We have added additional btrfs balance options, raid1c3 and raid1c4, and modified the raid6 balance operation to set metadata to raid1c3 (previously raid1). However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile. The solution is to simply run the same balance again. We consider this to be a btrfs bug, and if no solution is forthcoming we'll add the second balance to the code by default. For now, it's left as-is.
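On the command line, the workaround described above amounts to running the identical balance twice. A sketch, assuming a pool mounted at /mnt/cache (an example path; raid1c3 requires at least three devices in the pool):

```shell
# Convert a pool's data to raid6 and its metadata to raid1c3
# (mirrors what the webgui balance does; the mount point is an assumed example).
btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache

# If the volume was empty, some extents may keep the old profile;
# running the same balance a second time converts the stragglers.
btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache

# Verify that all block groups now report the expected profiles.
btrfs filesystem df /mnt/cache
```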
THE PRIMARY FOCUS of this release is to put tools in place to help users migrate data off SSD-based pools so that those devices may be re-partitioned if necessary, and then migrate the data back.

What are we talking about? For several years now, storage devices managed by Unraid OS have been formatted with an "Unraid Standard Partition Layout". This layout has partition 1 starting at offset 32KiB from the start of the device and extending to the end of the device. (For devices with 512-byte sectors, partition 1 starts in sector 64; for 4096-byte-sector devices, partition 1 starts in sector 8.) This layout achieves maximum storage efficiency and ensures partition 1 starts on a 4096-byte boundary.

Through user reports and extensive testing, however, we have noted that many modern SSD devices, in particular the Samsung EVO, do not perform most efficiently using this partition layout: the devices seem to write far more than one would expect, and with an SSD one wants to minimize writes as much as possible. The solution to the "excessive SSD write" issue is to position partition 1 at offset 1MiB from the start of the device instead of at 32KiB. This will both increase performance and decrease writes on affected devices.

Do you absolutely need to re-partition your SSDs? Probably not, depending on which devices you have. Click on a device from Main, scroll down to Attributes and take a look at "Data units written". If this is increasing very rapidly then you would probably benefit from re-partitioning.

Note: If you have already (re)Formatted using a previous 6.9-beta release, the proper partition layout will appear like this on the Device Information page. For an SSD smaller than 2TiB:

Partition format: MBR: 1MiB-aligned

For an SSD larger than 2TiB:

Partition format: GPT: 1MiB-aligned

Here's what's in this release to help facilitate re-partitioning of SSD devices: an Erase button which appears in the Device Information page.
The Erase button may be used to erase (delete) content from a volume. A volume is either the content of an unRAID array data disk or the content of a pool. In the case of an unRAID disk, only that device is erased; in the case of a multiple-device pool, ALL devices of the pool are erased.

The extent of the Erase varies depending on whether the array is Stopped, or Started in Maintenance mode (if Started in Normal mode, all volume Erase buttons are disabled).

Started/Maintenance mode: the LUKS header (if any) and any file system within partition 1 are erased. The MBR (master boot record) is not erased.

Stopped: unRAID array disk volumes and pool volumes are treated a little differently:

unRAID array disk volumes - if Parity and/or Parity2 is valid, then the operation proceeds exactly as above, that is, only the content of partition 1 is erased and the MBR (master boot record) is left as-is; but if there is no valid parity, then the MBR is also erased.

Pool volumes - partition 1 of all devices within the pool is erased, and then the MBR is also erased.

The purpose of erasing the MBR is to permit re-partitioning of the device if required. Upon Format, Unraid OS will position partition 1 at 32KiB for HDD devices and at 1MiB for SSD devices.

Note that erase does not overwrite the storage content of a device; it simply clears the LUKS header if present (which effectively makes the device unreadable), along with the file system and MBR signatures. A future Unraid OS release may include the option of overwriting the data.

Additional "Mover" capabilities. Since SSD pools are commonly used to store vdisk images, shfs/mover is now aware of:

sparse files - when a sparse file is moved from one volume to another, its sparseness is preserved

NoCOW attribute - when a file or directory in a btrfs volume has the NoCOW attribute set, the attribute is preserved when the file or directory is moved to another btrfs volume. Note that btrfs subvolumes are not preserved.
A future Unraid OS release may include preservation of btrfs subvolumes.

OK, how do I re-partition my SSD pools? Outlined here are two basic methods:

"Mover" method - The idea is to use the Mover to copy all data from the pool to a target device in the unRAID array, then erase all devices of the pool and reformat, and finally use the Mover to copy all the data back.

"Unassign/Re-assign" method - The idea here is to, one by one, remove a device from a btrfs pool, balance the pool with the reduced device count, then re-assign the device back to the pool, and balance the pool back to include the device. This works because Unraid OS will re-partition new devices added to an existing btrfs pool. This method is not recommended for a pool with more than 2 devices, since the first balance operation may be write-intensive, and writes are what we're trying to minimize. Also, it can be tricky to determine whether enough free space really exists after removing a device to rebalance the pool. Finally, this method will introduce a time window where your data is on non-redundant storage.

No matter which method, if you have absolutely critical data in the pool we strongly recommend making an independent backup first (you are already doing this, right?).

Mover Method

This procedure presumes a multi-device btrfs pool containing one or more cache-only or cache-prefer shares.

1. With the array Started, stop any VMs and/or Docker applications which may be accessing the pool you wish to re-partition. Make sure no other external I/O is targeting this pool.

2. For each share on the pool, go to the Share Settings page and make some adjustments: change from cache-only (or cache-prefer) to cache-yes, and assign an array disk or disks via the Include mask to receive the data. If you wish to preserve the NoCOW attribute (Copy-on-write set to No) on files and directories, these disks should be formatted with btrfs. Of course, ensure there is enough free space to receive the data.

3. Now go back to Main and click the Move button.
This will move the data of each share to the target array disk(s).

4. Verify no data is left on the pool, Stop the array, click on the pool and then click the Erase button.

5. Start the array and the pool should appear Unformatted - go ahead and Format the pool (this is what will re-write the partition layout).

6. Back on the Share Settings page, for each of the above shares: change from cache-yes to cache-prefer.

7. On the Main page click the Move button. This will move the data of each share back to the pool.

8. Finally, back on the Share Settings page, for each share: change from cache-prefer back to cache-only if desired.

Unassign/Re-assign Method

1. Stop the array and unassign one of the devices from your existing pool; leave the device unassigned.

2. Start the array. A balance will take place on your existing pool. Let the balance complete.

3. Stop the array. Re-assign the device, adding it back to your existing pool.

4. Start the array. The added device will get re-partitioned and a balance will start moving data to the new device. Let the balance complete.

5. Repeat steps 1-4 for the other device in your existing pool.

What's happening here is this: at the completion of step 2, btrfs will 'delete' the missing device from the volume and wipe the btrfs signature from it. At the beginning of step 4, Unraid OS will re-partition the new device being added to the existing pool.

I don't care about preserving data in the pool. In this case just Stop the array, click on the pool and then click Erase. Start the array and Format the pool - done.

Useful to know: when Linux creates a file system on an SSD device, it will first perform a "blkdiscard" on the entire partition. Similarly, "blkdiscard" is initiated on partition 1 of a new device added to an existing btrfs pool.

What about array devices? If you have SSD devices in the unRAID array, the only way to safely re-partition those devices is to either remove them from the array, or remove parity devices from the array. This is because re-partitioning will invalidate parity.
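For reference, the two layouts translate to partition-1 start sectors with simple arithmetic from the offsets quoted above; a 1MiB-aligned device with 512-byte sectors shows its first partition starting at sector 2048 in fdisk output:

```shell
# Partition-1 start sector = partition offset / sector size.
# Legacy 32KiB layout:
echo $(( 32 * 1024 / 512 ))      # 512-byte sectors -> sector 64
echo $(( 32 * 1024 / 4096 ))     # 4096-byte sectors -> sector 8
# New 1MiB-aligned layout:
echo $(( 1024 * 1024 / 512 ))    # 512-byte sectors -> sector 2048
```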
Note also the volume size will be slightly smaller.

Version 6.9.0-beta29 2020-09-27 (vs -beta25)

Base distro:
at-spi2-core: version 2.36.1
bash: version 5.0.018
bridge-utils: version 1.7
brotli: version 1.0.9
btrfs-progs: version 5.6.1
ca-certificates: version 20200630
cifs-utils: version 6.11
cryptsetup: version 2.3.4
curl: version 7.72.0 (CVE-2020-8231)
dbus: version 1.12.20
dnsmasq: version 2.82
docker: version 19.03.13
ethtool: version 5.8
fribidi: version 1.0.10
fuse3: version 3.9.3
git: version 2.28.0
glib2: version 2.66.0 build 2
gnutls: version 3.6.15
gtk+3: version 3.24.23
harfbuzz: version 2.7.2
haveged: version 1.9.13
htop: version 3.0.2
iproute2: version 5.8.0
iputils: version 20200821
jasper: version 2.0.21
jemalloc: version 5.2.1
libX11: version 1.6.12
libcap-ng: version 0.8
libevdev: version 1.9.1
libevent: version 2.1.12
libgcrypt: version 1.8.6
libglvnd: version 1.3.2
libgpg-error: version 1.39
libgudev: version 234
libidn: version 1.36
libpsl: version 0.21.1 build 2
librsvg: version 2.50.0
libssh: version 0.9.5
libvirt: version 6.6.0 (CVE-2020-14339)
libxkbcommon: version 1.0.1
libzip: version 1.7.3
lmdb: version 0.9.26
logrotate: version 3.17.0
lvm2: version 2.03.10
mc: version 4.8.25
mpfr: version 4.1.0
nano: version 5.2
ncurses: version 6.2_20200801
nginx: version 1.19.1
ntp: version 4.2.8p15 build 2
openssl-solibs: version 1.1.1h
openssl: version 1.1.1h
p11-kit: version 0.23.21
pango: version 1.46.2
php: version 7.4.10 (CVE-2020-7068)
qemu: version 5.1.0 (CVE-2020-10717, CVE-2020-10761)
rsync: version 3.2.3
samba: version 4.12.7 (CVE-2020-1472)
sqlite: version 3.33.0
sudo: version 1.9.3
sysvinit-scripts: version 2.1 build 35
sysvinit: version 2.97
ttyd: version 1.6.1
util-linux: version 2.36
wireguard-tools: version 1.0.20200827
xev: version 1.2.4
xf86-video-vesa: version 2.5.0
xfsprogs: version 5.8.0
xorg-server: version 1.20.9 build 3
xterm: version 360
xxHash: version 0.8.0

Linux kernel:
version 5.8.12
kernel-firmware: version kernel-firmware-20200921_49c4ff5
oot: Realtek r8152: version 2.13.0
oot: Tehuti tn40xx: version

Management:
btrfs: include 'discard=async' mount option
emhttpd: avoid using remount to set additional mount options
emhttpd: added wipefs function (webgui 'Erase' button)
shfs: move: support sparse files
shfs: move: preserve ioctl_iflags when moving between same file system types
smb: remove setting 'aio' options in smb.conf, use samba defaults
webgui: Update noVNC to v1.2.0
webgui: Docker: more intuitive handling of images
webgui: VMs: more intuitive handling of image selection
webgui: VMs: Fixed: rare cases vdisk defaults to Auto when it should be Manual
webgui: VMs: Fixed: Adding NICs or VirtFS mounts to a VM is limited
webgui: VM manager: new setting "Network Model"
webgui: Added new setting "Enable user share assignment" to cache pool
webgui: Dashboard: style adjustment for server icon
webgui: Update jGrowl to version 1.4.7
webgui: Fix ' appearing
webgui: VM Manager: add 'virtio-win-0.1.189-1' to VirtIO-ISOs list
webgui: Prevent bonded NICs from being bound to vfio-pci too
webgui: Better handling of multiple NICs with vfio-pci
webgui: Suppress WG on Dashboard if no tunnels defined
webgui: Suppress Autofan link on Dashboard if plugin not installed
webgui: Detect invalid session and logout current tab
webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication
webgui: Fix notifications continually reappearing
webgui: Support links on notifications
webgui: Add raid1c3 and raid1c4 btrfs pool balance options
webgui: For raid6 btrfs pool data profile use raid1c3 metadata profile
webgui: Permit file system configuration when array Started for Unmountable volumes
webgui: Fix not able to change parity check schedule if no cache pool present
webgui: Disallow "?" in share names
webgui: Add customizable timeout when stopping containers
  2. 10 points
Changes vs. 6.9.0-beta29 include:

Added a workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created the file /etc/modprobe.d/mpt3sas-workaround.conf, which contains this line:

options mpt3sas max_queue_depth=10000

When the mpt3sas module is loaded at boot, that option will be specified. If you previously added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can remove it. Likewise, if you manually load the module via the 'go' file, you can remove that as well. When/if the mpt3sas maintainer fixes the core issue in the driver, we'll get rid of this workaround.

Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VMs.

A handful of other bug fixes, including 'unblacklisting' the ast driver (the Aspeed GPU driver). For those using those on-board graphics chips, primarily on Supermicro boards, this should increase the speed and resolution of the local console webGUI.

Version 6.9.0-beta30 2020-10-05 (vs -beta29)

Base distro:
libvirt: version 6.5.0 [revert from version 6.6.0]
php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)

Linux kernel:
version 5.8.13
ast: removed blacklisting from /etc/modprobe.d
mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"

Management:
at: suppress session open/close syslog messages
emhttpd: correct 'Erase' logic for unRAID array devices
emhttpd: wipefs encrypted device removed from multi-device pool
emhttpd: yet another btrfs 'free/used' calculation method
webGUI: Update statuscheck
webGUI: Fix dockerupdate.php warnings
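The mpt3sas workaround above can also be applied by hand on an affected system; a sketch, using only the file path and option named in the release notes:

```shell
# Create the modprobe drop-in so the option is applied whenever
# the mpt3sas module is loaded.
cat > /etc/modprobe.d/mpt3sas-workaround.conf <<'EOF'
options mpt3sas max_queue_depth=10000
EOF

# After the next boot (or module reload), confirm the option took effect.
cat /sys/module/mpt3sas/parameters/max_queue_depth
```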
  3. 8 points
    Something really cool happened. I woke up this morning and my profile had this green wording on it. I just officially became the newest UNRAID "Community Developer". I just wanted to say thanks to UNRAID and @SpencerJ for bestowing this honor. I am pleased to be able to add to the community overall. You guys rock! Honestly, I have been a part of many forums over the years, and I have never seen a community so eager to help, never condescending, always maintaining professional decorum, and overall just a great place to be. I'm proud to be a part of it!
  4. 7 points
    You have just described how almost all software functions 🤣
  5. 7 points
The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark the devices for stubbing on Tools/System Devices. There are a lot of changes already in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the Community to 6.9 first, give people a chance to use the new stubbing feature, and then produce a 6.10 where all the GPU drivers are included.
  6. 6 points
Sure is. When Big Sur is released officially, a new Macinabox will be out, with a bunch of new features and a choice between the OpenCore and Clover bootloaders.
  7. 6 points
It should be noted that right now, the only SAS chipset definitely affected is the SAS2116, i.e. the 9201-16e. My controller running the SAS2008 (a cross-flashed Dell H200) is completely unaffected.
  8. 5 points
Unraid Kernel Helper/Builder

With this container you can build your own customized Unraid kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the Kernel/Firmware/Modules/Root filesystem with nVidia & DVB drivers (currently DigitalDevices, LibreELEC, XBOX One USB Adapter and TBS OpenSource drivers selectable); optionally you can also enable ZFS, iSCSI Target, Intel iGPU and Mellanox Firmware Tools support (Mellanox only for 6.9.0 and up).

nVidia driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.

ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise the built images may not work. You can also set the ZFS version from 'latest' to 'master' to build from the latest branch on GitHub if you are using the 6.9.0 repo of the container.

iSCSI Target: Please note that this feature is at this time command-line only! The Unraid-Kernel-Helper plugin now has a basic GUI for creation/deletion of IQNs, FileIO/Block volumes, LUNs and ACLs. ATTENTION: Always mount a block volume with the path '/dev/disk/by-id/...' (otherwise you risk data loss)! For instructions on how to create a target, read the manuals: Manual Block Volume.txt Manual FileIO Volume.txt

ATTENTION: Please read the descriptions of the variables carefully! Once you have started the container, don't interrupt the build process; the container will automatically shut down when everything is finished. I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes, replacing 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.)
The build itself can take quite a while depending on your hardware, but should be done in ~30 minutes (some tasks can take very long depending on your hardware, please be patient).

Plugin now available (it will show all the information about the images/drivers/modules that it can get): https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg Or simply download it through the CA App.

This is how the build of the images works (simplified): The build process begins as soon as the Docker container starts (you will see that the container is stopped when the process is finished). Please be sure to set the build options that you need. Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to also see the log (this can be very laggy in the browser depending on how many components you choose). The whole process status is outlined by watching the logs (the button on the right of the Docker container). The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you also need to delete or move them for any subsequent builds. There is a backup copied to /mnt/cache/appdata/kernel/backup-VERSION. Copy that to another drive external to your Unraid server; that way you can easily copy it straight onto the Unraid USB if something goes wrong.

THIS CONTAINER WILL NOT CHANGE ANYTHING IN YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE. YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER. PLEASE BACK UP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER. THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW IT WORKS.
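The manual copy step might look like this in practice. A sketch assuming the default appdata paths and container name mentioned above; the backup destination and the "6.9.0" version suffix are example placeholders, adjust them to your setup:

```shell
# Watch the build log until the container stops itself.
docker attach Unraid-Kernel-Helper

# Back up the current boot files first (example destination path),
# then copy the newly built bzimage/bzroot/bzmodules/bzfirmware to the flash drive.
mkdir -p /mnt/user/backup/usb-flash
cp /boot/bz* /mnt/user/backup/usb-flash/
cp /mnt/cache/appdata/kernel/output-6.9.0/bz* /boot/

# Reboot so the server starts from the new images.
reboot
```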
UPDATE NOTICE: If a new update of Unraid is released, you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g.: 'ich777/unraid-kernel-helper:6.8.3'.

Forum notice: When something isn't working with or on your server and you make a forum post, always mention that you use a kernel built by this container! Note that LimeTech does not support custom kernels; ask in this thread instead if something is not working while you are using this specific kernel.

CUSTOM_MODE: This is only for advanced users! In this mode the container will stop right at the beginning and copy the build script and the dependencies for building the kernel modules for DVB and joydev into the main directory. (I highly recommend using this mode for changing things in the build script, like adding patches or other modules to build. Connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory; the build script is executable.)

Note: You can use the nVidia & DVB plugins from linuxserver.io to check whether your driver is installed correctly. Keep in mind that some things will display wrongly or not show up at all, such as the driver version in the nVidia plugin - but you will see the installed graphics cards, and the DVB plugin will claim that no kernel driver is installed even though you will see your installed cards. This is simply because I don't know how their plugins work.

Thanks to @Leoyzen, klueska from nVidia, and linuxserver.io for the motivation to look into how this all works...

For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers! After you have finished building the images, I recommend deleting the container! If you want to build again, please redownload it from the CA App so that the template is always the newest version!
Beta build (the following is a tutorial for v6.9.0): Upgrade to your preferred stock beta version first, reboot, and then start building (to avoid problems)! Download/redownload the template from the CA App and change the following things:

1. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'
2. Select the build options that you prefer
3. Click on 'Show more settings...'
4. Set Beta Build to 'true' (you can also put in, for example, 'beta25' without quotes to automatically download Unraid v6.9.0-beta25, in which case the remaining steps are not required)
5. Start the container and it will create the folders '/stock/beta' inside the main folder
6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them over within these 2 minutes, simply restart the container and the build will start once it finds all the files)

(You can get the files bzimage, bzroot, bzmodules and bzfirmware from the beta zip file from Limetech, or better, first upgrade to that beta version and then copy the files from your /boot directory to the directory created in step 5, to avoid problems.)

!!! Please also note that if you build anything beta, keep an eye on the logs, especially when it comes to building the kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!
IRC: irc.minenet.at:6697

Here you can download the prebuilt images:

Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 450.66)
Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 450.66 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 450.66 | ZFS version: 0.8.4)
Unraid Custom DVB builtin v6.8.3: Download (LE driver: 1.4.0)
Unraid Custom ZFS builtin v6.8.3: Download (ZFS version: 0.8.4)
Unraid Custom iSCSI builtin v6.8.3: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

Unraid Custom nVidia builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66)
Unraid Custom nVidia & DVB builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66 | ZFS build from 'master' branch on GitHub on 2020.08.19)
Unraid Custom ZFS builtin v6.9.0-beta25: Download (ZFS build from 'master' branch on GitHub on 2020.07.12)
Unraid Custom iSCSI builtin v6.9.0-beta25: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

Unraid Custom nVidia builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04)
Unraid Custom nVidia & DVB builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04 | ZFS v2.0.0-rc2)
Unraid Custom ZFS builtin v6.9.0-beta29: Download (ZFS v2.0.0-rc2)
Unraid Custom iSCSI builtin v6.9.0-beta29: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

Unraid Custom nVidia builtin v6.9.0-beta30: Download (nVidia driver: 455.28)
Unraid Custom nVidia & DVB builtin v6.9.0-beta30: Download (nVidia driver: 455.28 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0-beta30: Download (nVidia driver: 455.28 | ZFS 0.8.5)
Unraid Custom ZFS builtin v6.9.0-beta30: Download (ZFS 0.8.5)
Unraid Custom iSCSI builtin v6.9.0-beta30: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

If you like my work, please consider making a donation.
  9. 5 points
    This blog is a guide on how to securely back up one Unraid server to another geographically separated Unraid server using rsync and Wireguard by @spx404. If you have questions, comments or just want to say hey, post them here! https://unraid.net/blog/unraid-server-to-server-backups-with-rsync-and-wireguard
  10. 5 points
I've been using Unraid for a while now and have collected some experience on boosting SMB transfer speeds:

1.) Choose the right CPU

The most important part is to understand that SMB is single-threaded. This means SMB uses only one CPU core to transfer a file. This is valid for the server and the client. Usually this is not a problem, as SMB does not fully utilize a CPU core (except on really low-powered CPUs). But because of its ability to split shares across multiple disks, Unraid adds an additional process called SHFS, whose load rises in proportion to the transfer speed and can overload your CPU core. So the most important part is to choose the right CPU.

At the moment I'm using an i3-8100, which has 4 cores and 2257 single-thread Passmark points. With this single-thread power I'm able to use the full bandwidth of my 10G network adapter, which was not possible with my previous Intel Atom C3758 (857 points), although both have comparable total performance. I was not even able to reach 1G speeds while a parallel Windows Backup was running (see the next section to bypass this limitation). Now I'm able to transfer thousands of small files and, in parallel, transfer a huge file at 250 MB/s.

From this experience I suggest a CPU with around 1400 single-thread Passmark points to fully utilize a 1G Ethernet port. As an example: the smallest CPU I would suggest for Unraid is an Intel Pentium Silver J5040. P.S. Passmark has a list sorted by single-thread performance for desktop CPUs and server CPUs.

2.) Bypass the single-thread limitation

The single-thread limitation of SMB and SHFS can be bypassed by opening multiple connections to your server. This means connecting to "different" servers.
The easiest way to accomplish that is to use the IP address of your server as a "second" server while using the same user login:

\\tower\sharename -> best option for user access through the file explorer, as it is automatically displayed
\\<server-ip>\sharename -> best option for backup software; you could map it as a network drive

If you need more connections, you can add multiple entries to your Windows hosts file (Win+R and execute "notepad c:\windows\system32\drivers\etc\hosts"):

<server-ip> tower2
<server-ip> tower3

Results: If you now download a file from your Unraid server through \\<server-ip> while a backup is running on \\tower, it will reach the maximum speed, while a download from \\tower is massively throttled.

3.) Bypass Unraid's SHFS process

If you enable access directly to the cache disk and upload a file to \\tower\cache, this will bypass the SHFS process. Beware: Do not move/copy files between the cache disk and shares, as this could cause data loss! The eligible user account will be able to see all cached files, even those from other users.

Temporary solution, or "for admins only": As admin, or for a short test, you could enable "disk shares" under Settings -> Global Share Settings. With that, all users can access all array and cache disks as SMB shares. As you don't want that, your first step is to click on each disk in the WebGUI > Shares and forbid user access, except for the cache disk, which gets read/write access only for your "admin" account. Beware: Do not create folders in the root of the cache disk, as this will create new SMB shares.

Safer permanent solution: Use this explanation.

Results: In this thread you can see the huge difference between copying to a cached share and copying directly to the cache disk.

4.) Enable SMB Multichannel + RSS

SMB Multichannel is a feature of SMB3 that allows splitting file transfers across multiple NICs (Multichannel) and multiple CPU cores (RSS), supported since Windows 8.
This will raise your throughput depending on your number of NICs, NIC bandwidth, CPU and the settings used.

This feature is experimental: SMB Multichannel has been considered experimental since its release with Samba 4.4. The main bug behind this status is resolved in Samba 4.13, and the Samba developers plan to resolve all bugs with 4.14. Unraid 6.8.3 contains Samba 4.11. This means you use Multichannel at your own risk!

Multichannel for multiple NICs: Let's say your mainboard has four 1G NICs and your client has a 2.5G NIC. Without Multichannel, the transfer speed is limited to 1G (117.5 MByte/s). But if you enable Multichannel, it will split the file transfer across the four 1G NICs, boosting your transfer speed to 2.5G (294 MByte/s). Additionally, it uses multiple CPU cores, which is useful to avoid overloading smaller CPUs.

To enable Multichannel you need to open the Unraid web terminal and enter the following (the file is usually empty, so don't be surprised):

nano /boot/config/smb-extra.conf

And add the following to it:

server multi channel support = yes

Press "Ctrl+X", confirm with "Y" and "Enter" to save the file, then restart the Samba service with this command:

samba restart

You may eventually need to reboot your Windows client, but then it is enabled and should work.

Multichannel + RSS for single and multiple NICs: But what happens if your server has only one NIC? Then Multichannel has nothing to split across, but it has a sub-feature called RSS which is able to split file transfers across multiple CPU cores even with a single NIC. Of course this feature also works with multiple NICs. And this is important, because it creates multiple single-threaded SMB processes and SHFS processes which are then load-balanced across all CPU cores, instead of overloading only a single core. So if your server has slow SMB file transfers while the overall CPU load in the Unraid WebGUI Dashboard is not really high, enabling RSS will boost your SMB file transfers to the maximum!
But it requires RSS capability on both sides. You need to check your server's NIC by opening the Unraid web terminal and entering this command (this could become obsolete with Samba 4.13, as it has built-in RSS autodetection):

egrep 'CPU|eth*' /proc/interrupts

It must return multiple lines (one per CPU core) like this:

egrep 'CPU|eth0' /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
 129:   29144060          0          0          0  IR-PCI-MSI 524288-edge      eth0
 131:          0   25511547          0          0  IR-PCI-MSI 524289-edge      eth0
 132:          0          0   40776464          0  IR-PCI-MSI 524290-edge      eth0
 134:          0          0          0   17121614  IR-PCI-MSI 524291-edge      eth0

Now you can check your Windows 8 / Windows 10 client by opening PowerShell as admin and entering this command:

Get-SmbClientNetworkInterface

It must return "True" for "RSS Capable":

Interface Index RSS Capable RDMA Capable Speed   IpAddresses Friendly Name
--------------- ----------- ------------ -----   ----------- -------------
11              True        False        10 Gbps {}          Ethernet 3

Now, after you are sure that RSS is supported on your server, you can enable Multichannel + RSS by opening the Unraid web terminal and entering the following (the file is usually empty, so don't wonder):

nano /boot/config/smb-extra.conf

Add the following; change the IP address to your Unraid server's IP, and set the speed to "10000000000" for a 10G adapter or "1000000000" for a 1G adapter:

server multi channel support = yes
interfaces = ";capability=RSS,speed=10000000000"

If you are using multiple NICs, the syntax looks like this (add the RSS capability only for supporting NICs!):

interfaces = ";capability=RSS,speed=10000000000" ";capability=RSS,speed=10000000000"

Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Now restart the SMB service:

samba restart

Does it work?
After rebooting your Windows client (this seems to be a must), download a file from your server (so a connection is established). Now you can check whether Multichannel + RSS works by opening Windows PowerShell as admin and entering this command:

Get-SmbMultichannelConnection -IncludeNotSelected

It must return a line similar to this (a returned line = Multichannel works), and if you want to benefit from RSS, then "Client RSS Capable" must be "True":

Server Name Selected Client IP Server IP Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- --------- --------- ---------------------- ---------------------- ------------------ -------------------
tower       True                         11                     13                     True               False

If you are interested in test results, look here.

5.) smb.conf Settings Tuning

At the moment I'm doing intense tests with different SMB config settings found on different websites:

https://wiki.samba.org/index.php/Performance_Tuning
https://wiki.samba.org/index.php/Linux_Performance
https://wiki.samba.org/index.php/Server-Side_Copy
https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/speed.html
https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
https://lists.samba.org/archive/samba-technical/attachments/20140519/642160aa/attachment.pdf
https://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
https://www.samba.org/samba/docs/current/man-html/ (search for "vfs")
https://lists.samba.org/archive/samba/2016-September/202697.html
https://codeinsecurity.wordpress.com/2020/05/18/setting-up-smb-multi-channel-between-freenas-or-any-bsd-linux-and-windows-for-20gbps-transfers/
https://www.snia.org/sites/default/files/SDC/2019/presentations/SMB/Metzmacher_Stefan_Samba_Async_VFS_Future.pdf
https://www.heise.de/newsticker/meldung/Samba-4-12-beschleunigt-Verschluesselung-und-Datentransfer-4677717.html

I will post my results after all tests have been finished.
By now I would say it does not really influence the performance, as recent Samba versions are already optimized, but we will see.

6.) Choose a proper SSD for your cache

You could use Unraid without an SSD, but if you want fast SMB transfers an SSD cache is absolutely required. Otherwise you are limited by slow parity writes and/or your slow HDD. But many SSDs on the market are not well suited for use as an Unraid SSD cache.

DRAM

Many cheap models do not have a DRAM cache. This small buffer is used to collect very small files or random writes before they are finally written to the SSD, and/or serves as a high-speed area for the file mapping table. In short: you need a DRAM cache in your SSD. No exception.

SLC Cache

While DRAM is only absent in cheap SSDs, an SLC cache can be missing in every price range. Some cheap models use a small SLC cache to "fake" their spec-sheet numbers. Some mid-range models use a big SLC cache to raise durability and speed when installed in a client PC. And some high-end models do not have an SLC cache at all, as their flash cells are fast enough without it. Ultimately you are not interested in the SLC cache; you are only interested in continuous write speeds (see "Verify Continuous Writing Speed").

Determine the Required Writing Speed

But before you are able to select the right SSD model, you need to determine your minimum required transfer speed. This should be simple. How many Ethernet ports do you want to use, or do you plan to install a faster network adapter? Let's say you have two 1G ports and plan to install a 5G card. With SMB Multichannel it's possible to use them in sum, and as you plan to install a 10G card in your client, you could use 7G in total.
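This sizing math can be scripted. A minimal sketch, using the ~117.5 MByte/s of real-world throughput per 1 Gbit/s link assumed throughout this guide; the 7G figure is the example above (two 1G ports plus a 5G card):

```shell
#!/bin/sh
# Sketch of the cache sizing math: sum of usable link speeds (in Gbit/s)
# multiplied by ~117.5 MByte/s real-world throughput per 1 Gbit/s.
total_gbit=7   # example from the guide: two 1G ports + one 5G card
awk -v g="$total_gbit" 'BEGIN { printf "required write speed: %.1f MByte/s\n", g * 117.5 }'
```

Change total_gbit to your own summed link speed to get the minimum continuous write speed your cache should sustain.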
Now we can calculate: 7G * 117.5 MByte/s (real throughput per 1G Ethernet) = 822 MByte/s, and by that we have two options:

buy one M.2 NVMe (assuming your motherboard has such a slot) with a minimum writing speed of 800 MByte/s
buy two or more SATA SSDs and use them in a RAID0, each with a minimum writing speed of 400 MByte/s

Verify Continuous Writing Speed of the SSD

As an existing SLC cache hides the real transfer speed, you need to invest some time to check whether your desired SSD model has an SLC cache and how much the SSD throttles after it is full. A solution could be to search for "review slc cache" in combination with the model name. Using the image search can be helpful as well (maybe you see a graph with a falling line). If you do not find anything, use YouTube. Many people out there test their new SSD by simply copying a huge amount of files onto it.

Note: CrystalDiskMark, AS SSD, etc. benchmarks are useless, as they only test a really small amount of data (which fits into the fast cache).

Durability

You could look at the "TBW" value of the SSD, but in the end you won't be able to kill the SSD within the warranty period, as long as the very first filling of your Unraid server is done without the SSD cache. As an example, a 1TB Samsung 970 EVO has a TBW of 600, and if your server has a total size of 100TB you would waste 100TBW on your first fill for nothing. If you plan to use Plex, think about using the RAM as your transcoding storage, which saves a huge amount of writes to your SSD. Conclusion: optimize your writes instead of buying an expensive SSD.

NAS SSD

Do not buy "special" NAS SSDs. They do not offer any benefits compared to the high-end consumer models, but cost more.

7.) More RAM

More RAM means more caching, and as RAM is even faster than the fastest SSDs, this gives an additional boost to your SMB transfers. I recommend installing two identical RAM modules (or more, depending on the number of slots) to benefit from "Dual Channel" speeds.
RAM frequency is not as important as RAM size.

Read Cache for Downloads

If you download a file twice, the second download does not read the file from your disk; instead it uses your RAM only. The same happens if you're loading the covers of your MP3s or movies, or if Windows is generating thumbnails of your photo collection. More RAM means more files in your cache. The read cache uses by default 100% of your free RAM.

Write Cache for Uploads

Linux uses by default 20% of your free RAM to cache writes before they are written to the disk. You can use the Tips and Tweaks plugin to change this value, or add this to your Go file (with the Config Editor plugin):

sysctl vm.dirty_ratio=20

But before changing this value, you need to be sure you understand the consequences:

Never use your NAS without a UPS if you use write caching, as this could cause huge data loss!
The bigger the write cache, the smaller the read cache (so using 100% of your RAM as write cache is not a good idea!)
If you upload files to your server, they are written to your disk 30 seconds later (vm.dirty_expire_centisecs)
Without SSD cache: if your upload size is generally higher than your write cache size, the kernel starts to clean up the cache while writing the transfer to your HDD(s) in parallel, which can result in slow SMB transfers. Either raise your cache size so it never fills up, or consider disabling the write cache completely.
With SSD cache: SSDs love parallel transfers (read #6 of this guide), so a huge or even full write cache is not a problem.

But which dirty_ratio value should you set? This is something you need to determine yourself, as it's completely individual. At first you need to think about the highest RAM usage that is possible, like active VMs, ramdisks, Docker containers, etc.
This gives you the smallest amount of free RAM on your server:

Total RAM size - reserved RAM through VMs - used RAM through Docker containers - ramdisks = free RAM

Now the harder part: determine how much RAM is needed for your read cache. Do not forget that VMs, Docker containers, processes etc. load files from disks, and those are all cached as well. I thought about this and came up with this command that counts hot files:

find /mnt/cache -type f -amin -1440 ! -size +1G -exec du -bc {} + | grep total$ | cut -f1 | awk '{ total += $1 }; END { print total }' | numfmt --to=iec-i --suffix=B

It counts the size of all files on your SSD cache that were accessed in the last 24 hours (1440 minutes; find's -amin takes minutes, not seconds)
The maximum file size is 1GiB, to exclude VM images, Docker containers, etc.
This works only if you (hopefully) use your cache for your hot shares like appdata, system, etc.
Of course you can repeat this command on several days to check how it fluctuates
This command must be executed after the mover has finished its work
This command isn't perfect, as it does not count hot files inside a VM image

Now we can calculate:

100 / Total RAM x (Free RAM - Command Result) = vm.dirty_ratio

If your calculated "vm.dirty_ratio" is lower than 5% (or even negative), set it to 5 and buy more RAM. Between 5% and 20%, set it accordingly, but you should consider buying more RAM. Between 20% and 90%, set it accordingly. If your calculated "vm.dirty_ratio" is higher than 90%, you are probably not using your SSD cache for hot shares (as you should), or your RAM is huge as hell (congratulations ^^). I suggest not setting a value higher than 90.

Of course you need to recalculate this value if you add more VMs or Docker containers.

8.) Disable haveged

Unraid does not trust the randomness of Linux and uses haveged instead. By that, all encryption processes on the server use haveged, which produces extra load.
If you don't need it, disable it through your Go file (CA Config Editor) as follows: # ------------------------------------------------- # disable haveged as we trust /dev/random # https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452 # ------------------------------------------------- /etc/rc.d/rc.haveged stop
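To confirm that haveged is actually gone after this change, a quick check from the web terminal. This is just a sketch and assumes pgrep (procps) is available:

```shell
#!/bin/sh
# Quick check (sketch): report whether a haveged process is still running.
# pgrep -x matches the exact process name and is silent; we only use its
# exit status.
if pgrep -x haveged >/dev/null 2>&1; then
    echo "haveged is still running"
else
    echo "haveged is not running"
fi
```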
  11. 5 points
Can the stable version 6.8.3 get the latest Nvidia drivers? Plex betas (and likely the next stable) now require 450.66 or later drivers to keep NVENC operational -- hw transcoding is broken on the latest versions because of this. I prefer not to run betas, so essentially I am stuck on a previous working Plex version (
https://www.reddit.com/r/PleX/comments/j0bzu1/it_was_working_before_but_now_im_having_issues/
https://forums.plex.tv/t/help-fixing-my-broken-nvidia-nvenc-driver-config-ubuntu-20-04/637854
https://forums.plex.tv/t/nvenc-hardware-encoding-broken-on-1-20-2-3370/637925
  12. 5 points
OK, testing is now over. I am satisfied that the PIA next-gen network using OpenVPN is working well enough to push this to production, so the changes have now been included in the 'latest' tagged image. If you have been using the 'test' tagged image, please drop ':test' from the repository name and click on Apply to pick up 'latest' again. If you want to switch from current to next-gen, please generate a new ovpn file using the following procedure:

Note: The new image will still support the current (or legacy) PIA network for now, so you can use either, dictated by the ovpn file.
  13. 5 points
Added prebuilt images for Unraid v6.9.0beta29 to the bottom of the first post. Prebuilt images include:

nVidia
nVidia & DVB
nVidia & ZFS
ZFS
iSCSI
  14. 4 points
This was an idea of @frodr. After several tests I came up with the following script, which:

Calculates how many video files (small parts of them) will fit into 50% of the free RAM (the amount can be changed)
Obtains the X most recent movies / episodes (depending on the used path)
Preloads 60MB of the beginning of the video file and 1MB of its ending into the RAM
Preloads subtitle files that belong to the preloaded video files

Now, if your disks are sleeping and you start a movie or episode through Plex, the client will download the video parts from RAM, and while the buffer is emptied, the HDD spins up and the buffer fills up again. This means all preloaded movies / episodes will start directly, without delay.

Notes:

It does not reserve any RAM, so your RAM stays fully available
RAM preloading is not permanent and will be overwritten by server uploads/downloads or other processes over time. I suggest executing this script once per day (only missing video files will be touched)
For best reliability, execute the script AFTER all your backup scripts have completed (as, for example, rsync can use the complete available RAM while syncing)
If you suffer from buffering at the beginning of a movie / episode, try to raise "video_min_size"
All preloaded videos can be found in the script's log (CA User Scripts)

Script

#!/bin/bash
# #####################################
# Script:       Plex Preloader v0.9
# Description:  Preloads the recent video files of a specific path into the RAM to bypass HDD spinup latency
# Author:       Marc Gutt
#
# Changelog:
# 0.9
# - Preloads only subtitle files that belong to preloaded video files
# 0.8
# - Bug fix: In some situations video files were skipped instead of preloading them
# 0.7
# - Unraid dashboard notification added
# - Removed benchmark for subtitle preloading
# 0.6
# - multiple video path support
# 0.5
# - replaced the word "movie" against "video" as this script can be used for TV Shows as well
# - reduced preload_tail_size to 1MB
# 0.4
# - precleaning cache is now optional
# 0.3
# - the read cache is cleaned before preloading starts
# 0.2
# - preloading time is measured
# 0.1
# - first release
#
# ######### Settings ##################
video_paths=(
    "/mnt/user/Movies/"
    "/mnt/user/TV/"
)
video_min_size="2000MB" # 2GB, to exclude bonus content
preload_head_size="60MB" # 60MB, raise this value if your video buffers after ~5 seconds
preload_tail_size="1MB" # 1MB, should be sufficient even for 4K
video_ext='avi|mkv|mov|mp4|mpeg' # https://support.plex.tv/articles/203824396-what-media-formats-are-supported/
sub_ext='srt|smi|ssa|ass|vtt' # https://support.plex.tv/articles/200471133-adding-local-subtitles-to-your-media/#toc-1
free_ram_usage_percent=50
preclean_cache=0
notification=1
# #####################################
#
# ######### Script ####################
# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi; trap 'rmdir "/tmp/${0///}"' EXIT;
# check user settings
video_min_size="${video_min_size//[!0-9.]/}" # float filtering https://stackoverflow.com/a/19724571/318765
video_min_size=$(awk "BEGIN { print $video_min_size*1000000}") # convert MB to Bytes
preload_head_size="${preload_head_size//[!0-9.]/}"
preload_head_size=$(awk "BEGIN { print $preload_head_size*1000000}")
preload_tail_size="${preload_tail_size//[!0-9.]/}"
preload_tail_size=$(awk "BEGIN { print $preload_tail_size*1000000}")
# clean the read cache
if [ "$preclean_cache" = "1" ]; then
    sync; echo 1 > /proc/sys/vm/drop_caches
fi
# preload
preloaded=0
skipped=0
preload_total_size=$(($preload_head_size + $preload_tail_size))
free_ram=$(free -b | awk '/^Mem:/{print $7}')
free_ram=$(($free_ram / 100 * $free_ram_usage_percent))
echo "Available RAM in Bytes: $free_ram"
preload_amount=$(($free_ram / $preload_total_size))
echo "Amount of Videos that can be preloaded: $preload_amount"
# fetch video files
while IFS= read -r -d '' file; do
    if [[ $preload_amount -le 0 ]]; then break; fi
    size=$(stat -c%s "$file")
    if [ "$size" -gt "$video_min_size" ]; then
        TIMEFORMAT=%R
        benchmark=$(time ( head -c $preload_head_size "$file" ) 2>&1 1>/dev/null )
        echo "Preload $file (${benchmark}s)"
        if awk 'BEGIN {exit !('$benchmark' >= '0.150')}'; then
            preloaded=$((preloaded + 1))
        else
            skipped=$((skipped + 1))
        fi
        tail -c $preload_tail_size "$file" > /dev/null
        preload_amount=$(($preload_amount - 1))
        video_path=$(dirname "$file")
        # fetch subtitle files
        find "$video_path" -regextype posix-extended -regex ".*\.($sub_ext)" -print0 | while IFS= read -r -d '' file; do
            echo "Preload $file"
            cat "$file" >/dev/null
        done
    fi
done < <(find "${video_paths[@]}" -regextype posix-extended -regex ".*\.($video_ext)" -printf "%T@ %p\n" | sort -nr | cut -f2- -d" " | tr '\n' '\0')
# notification
if [[ $preloaded -eq 0 ]] && [[ $skipped -eq 0 ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i alert -s "Plex Preloader failed!" -d "No video file has been preloaded (wrong path?)!"
elif [ "$notification" = "1" ]; then
    /usr/local/emhttp/webGui/scripts/notify -i normal -s "Plex Preloader has finished" -d "$preloaded preloaded (from Disk) / $skipped skipped (already in RAM)"
fi
  15. 4 points
Hi all

It's not often I write in forums or express my opinions, but today is a day I feel I must. I have been using Unraid for quite a few years now; I believe it's been over 5 years, but for the life of me I cannot find the email from when I bought my licence.

When I embarked on the journey of wanting RAID-like features without actually using a traditional RAID, I looked over loads of options and even bought and tried FlexRAID, and thought it was working well until I had a drive failure that it did not warn me of, and data was lost because it refused to recover. Plus a few other issues, but I do not like to badmouth other products. So I started looking again and saw the various offerings, but none of them was really user friendly for a noob like me. Then I happened to stumble onto Unraid and thought I would give the limited (I believe it was 3 HDD) trial a go. Oh, it was everything I ever needed in a storage solution: simple to use, can upgrade HDDs down the line without a painful process, etc.

A short while later disaster hit and I suffered a failed 3TB Seagate HDD (I know plenty of us have suffered losses due to Seagate). But I still had access to all my data until a new drive came. Popped in the new drive, it rebuilt, and away I went again. Then one day I thought I really wanted to upgrade my array to handle 6TB drives and was not looking forward to it. Found it was just as easy again: rebuilds happened and once again Unraid just worked. Last week another Seagate 3TB failed (surprised it lasted so long), and as I am writing this, a 6TB HDD has replaced it and is 1.5% into the rebuild.

So really, thank you Lime Technology for selling a product that works well and for keeping up active development, adding features (also adding more drives to the Plus licence). Well done

Kevin

EDIT: Just found my reg date: Sat 04 Oct 2014 09:10:13 PM BST
  16. 4 points
  17. 4 points
I've been trialling Nextcloud as a cloud backup service for myself and, if successful, my family, who live remotely. I'm using Duplicati to perform the backups, but that's not the point of this guide. The point is that when I back up files to Nextcloud, the Docker image slowly fills up. I've never had the image reach 100%, but it's probably not a fun time. After searching the internet for hours trying to find anything, I eventually figured out what was required.

For context, I'm running Nextcloud behind a reverse proxy, for which I'm using Swag (Let's Encrypt). Through trial and error, the behaviour I observed is that when uploading files (via WebDAV in my case), they get put in the /tmp folder of Swag. Once they are fully uploaded, they are copied across to Nextcloud's /temp directory. Therefore, both paths need to be added as bind mounts for this to work.

What To Do

Head over to the Docker tab, edit Nextcloud and add a new Path variable:

Name: Temp
Container Path: /tmp
Host Path: /mnt/user/appdata/nextcloud/temp

Next edit the Swag (or Let's Encrypt) container, and add a new Path variable:

Name: Temp
Container Path: /var/lib/nginx/tmp
Host Path: /mnt/user/appdata/swag/temp

And that's it! Really simple fix, but no one seemed to have an answer. Now when I back up my files, the Docker image no longer fills up.
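Outside the Unraid GUI, the same two bind mounts can be expressed directly as docker run flags. This is only a sketch: the linuxserver image names and host paths are illustrative assumptions, and your existing container options need to be kept alongside them; only the container paths (/tmp and /var/lib/nginx/tmp) are the ones described above.

```shell
# Sketch only (not a complete command line): the two bind mounts from above
# expressed as docker run flags. Image names and host paths are illustrative;
# add your existing ports, env vars and other volumes as usual.
docker run -d --name nextcloud \
  -v /mnt/user/appdata/nextcloud/temp:/tmp \
  linuxserver/nextcloud

docker run -d --name swag \
  -v /mnt/user/appdata/swag/temp:/var/lib/nginx/tmp \
  linuxserver/swag
```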
  18. 4 points
    This is great! I've been coming back to this forum multiple times a day waiting for an update. Just today I'd decided to clear & reformat my cache as I don't really trust btrfs, and this update has come just at the right time for me to upgrade with an empty cache and reconfigure things nicely. Hope that the "various non-work-related challenges" have all been resolved and you and your families are safe & well. Thanks for providing such a great system. Will be moving to this in the next day.
  19. 3 points
Unraid is a cut-down version of Slackware, specifically stripped of everything that's not needed, because it loads into and runs entirely in RAM. We don't have the luxury of just slapping every single driver and support package into it; you would end up with a minimum 16GB or 32GB RAM spec. Before VM and Docker container support was added, you could have all the NAS functionality with 1GB of RAM. Now 4GB is the practical bare minimum for NAS, even if you don't use VMs and containers, and 8GB is still cramped.

Adding support for a single adapter that works well in Slackware, provided the manufacturer keeps up with Linux kernel development, shouldn't be an issue. That way we can tell people: if you want wifi, here is a list of cards using that driver that are supported. It's the blanket statement of "let's support wifi" that doesn't work.

BTW, even if we do get that golden-ticket wifi chip support from the manufacturer and Unraid supports it perfectly, the forums will still be bombarded with performance issues, because either their router sucks, or their machine isn't in the zone of decent coverage, or their neighbours cause interference at certain times of day, etc.

Bottom line: wifi on a server just isn't ready for primetime yet. Desktop daily drivers, fine. 24/7/365 servers with constant activity from friends and family, no. It's much easier support-wise to require wired. If the application truly has to have wireless, there are plenty of ways to bridge a wireless signal and convert it to wired. A pair of commercial wifi access points with a dedicated backhaul channel works fine; that's what I use in a couple of locations.
  20. 3 points
    But helium ALWAYS goes up. Right?🤣
  21. 3 points
So, here's the thing: I have now set up a complete VLAN just for Docker containers and the unRAID server. On the one hand, I think it can't hurt anyway if every container has its own IP address (it also makes port selection easier in the long run), and on the other hand, it lets me temporarily assign whatever name I want to the IP address of, e.g., Nextcloud via DNS. On top of that, it is quite tidy: if I now look in DHCP, I have one VLAN containing the respective applications. I will keep it that way in the future, I think. The DNS part is only temporary until my Fritz!Box 7590 arrives!

Many thanks again to everyone who helped me here and gave me tips! I enjoy being around in this forum and will also try in the future to help out other unRAIDers with my knowledge as best I can.

I wish you all a nice weekend - maybe it's not as rainy where you are as it is here in Salzburg 😒

Best regards, Dominic
  22. 3 points
@Valerio found this out first, but never received an answer. Today I found it out, too. But this has been present since 2019 (or even longer). I would say it's a bug, as:

it prevents HDD/SSD spindown/sleep (depending on the location of docker.img)
it wears out the SSD in the long run (if docker.img is located there) - see this bug, too.
it prevents reaching the CPU's deep sleep states

What happens:

/var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content
/var/lib/docker/containers/*/config.v2.json is updated every 5 seconds with the same content, except for some timestamps (which shouldn't be part of a config file, I think)

Which Docker containers: verified are Plex (Original) and PiHole, but maybe this is a general behaviour.

As an example, the source of hostconfig.json, which was updated 17280 times yesterday (once every 5 seconds for a day) with the same content:

find /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44 -ls -name hostconfig.json -exec cat {} \;
2678289 4 -rw-r--r-- 1 root root 1725 Oct 8 13:46 /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/hostconfig.json
{
  "Binds": [
    "/mnt/user/tv:/tv:ro",
    "/mnt/cache/appdata/Plex-Media-Server:/config:rw",
    "/mnt/cache/appdata/Plex-Transcode:/transcode:rw",
    "/mnt/user/movie:/movie:ro"
  ],
  "ContainerIDFile": "",
  "LogConfig": { "Type": "json-file", "Config": { "max-file": "1", "max-size": "50m" } },
  "NetworkMode": "host",
  "PortBindings": {},
  "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 },
  "AutoRemove": false,
  "VolumeDriver": "",
  "VolumesFrom": null,
  "CapAdd": null,
  "CapDrop": null,
  "Capabilities": null,
  "Dns": [],
  "DnsOptions": [],
  "DnsSearch": [],
  "ExtraHosts": null,
  "GroupAdd": null,
  "IpcMode": "private",
  "Cgroup": "",
  "Links": null,
  "OomScoreAdj": 0,
  "PidMode": "",
  "Privileged": false,
  "PublishAllPorts": false,
  "ReadonlyRootfs": false,
  "SecurityOpt": null,
  "UTSMode": "",
  "UsernsMode": "",
  "ShmSize": 67108864,
  "Runtime": "runc",
  "ConsoleSize": [0, 0],
  "Isolation": "",
  "CpuShares": 0,
  "Memory": 0,
  "NanoCpus": 0,
  "CgroupParent": "",
  "BlkioWeight": 0,
  "BlkioWeightDevice": [],
  "BlkioDeviceReadBps": null,
  "BlkioDeviceWriteBps": null,
  "BlkioDeviceReadIOps": null,
  "BlkioDeviceWriteIOps": null,
  "CpuPeriod": 0,
  "CpuQuota": 0,
  "CpuRealtimePeriod": 0,
  "CpuRealtimeRuntime": 0,
  "CpusetCpus": "",
  "CpusetMems": "",
  "Devices": [
    { "PathOnHost": "/dev/dri", "PathInContainer": "/dev/dri", "CgroupPermissions": "rwm" }
  ],
  "DeviceCgroupRules": null,
  "DeviceRequests": null,
  "KernelMemory": 0,
  "KernelMemoryTCP": 0,
  "MemoryReservation": 0,
  "MemorySwap": 0,
  "MemorySwappiness": null,
  "OomKillDisable": false,
  "PidsLimit": null,
  "Ulimits": null,
  "CpuCount": 0,
  "CpuPercent": 0,
  "IOMaximumIOps": 0,
  "IOMaximumBandwidth": 0,
  "MaskedPaths": [
    "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys",
    "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats",
    "/proc/sched_debug", "/proc/scsi", "/sys/firmware"
  ],
  "ReadonlyPaths": [
    "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"
  ]
}
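You can reproduce the observation yourself by polling the file's modification time. A generic sketch using only coreutils: the container ID in TARGET is a placeholder for one of your own containers, and CHECKS bounds how long it watches.

```shell
#!/bin/sh
# Sketch: print a line every time TARGET's mtime changes. With the behaviour
# described above, you should see output roughly every 5 seconds.
TARGET="${TARGET:-/var/lib/docker/containers/<container-id>/hostconfig.json}"
CHECKS="${CHECKS:-60}"   # number of 1-second polls before the sketch exits
last=""
i=0
while [ "$i" -lt "$CHECKS" ]; do
    i=$((i + 1))
    now=$(stat -c %Y "$TARGET" 2>/dev/null)
    if [ -n "$last" ] && [ "$now" != "$last" ]; then
        echo "$TARGET modified"
    fi
    last="$now"
    sleep 1
done
```

Run it for a minute against one of the hostconfig.json files; a quiet container should print nothing, while an affected one prints a line about every five seconds.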
  23. 3 points
    <os> <type arch='x86_64' machine='pc-q35-3.1'>hvm</type> <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader> <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram> </os> this tag is what you need to change
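If you prefer the command line over editing the XML view, the loader and nvram paths can be rewritten in one go. A sketch: the OLD/NEW folder names are illustrative, XML defaults to a demo copy of the VM definition, and on Unraid you would normally apply this via the XML view or "virsh edit <vmname>".

```shell
#!/bin/sh
# Sketch: point the OVMF <loader> and <nvram> paths of a libvirt domain XML
# at a renamed domains folder. OLD/NEW are illustrative placeholders.
XML="${XML:-/tmp/macinabox.xml}"   # demo copy of the VM definition
OLD="MacinaboxCatalina"
NEW="MacinaboxBigSur"              # hypothetical new folder name
sed -i "s|/domains/$OLD/|/domains/$NEW/|g" "$XML"
grep -E 'loader|nvram' "$XML"      # show the rewritten lines
```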
  24. 3 points
The Ultimate UNRAID Dashboard Version 1.4 is here! This is a MASSIVE 😁 update adding many new powerful features, panels, and hundreds of improvements. The main goal of this release is to increase usability and simplify the dashboard so more people can modify it without getting lost in REGEX and having to ask for support as often. As a result, the most complex queries have been rewritten in a way that is clear and transparent, while still remaining just as powerful. Finally, I have added requested features and thrown in some new bells and whistles that I thought you guys would like. As always, I'm here if you need me. ENJOY!

Highlights:

- Keep it Simple - Added User Transparency Back Into Dashboard by Removing REGEX on Certain Panels
  - This Will Make it Extremely Easy to Customize the Dashboard to Your Specific Needs/Requirements
  - You Can Now See Exactly How Certain Panels are Derived, and Making Modifications is Self Explanatory
  - This Will Also Make Support MUCH Easier For Everyone!
- Multi-Host Support
  - Change the Host Drop Down Variable and Monitor Another Host Instantly
  - Added the Host Variable to Every Single Panel
  - The Entire Dashboard Can Now Monitor Any Host in Real Time With a Single Variable Change Via Drop Down Menu!
- Initial Support For Non Server Hardware
  - Initial Support For Sensors Plugin to Monitor Non Server Hardware (Only Used If IPMI Is NOT Supported on Your Hardware)
  - Requires New "sensors" Plugin (See Dependencies Section on Post #1)
  - Added Template Sensor Queries (Disabled By Default)
    - You Will Need to Modify These Example Queries As Required For Your Non Server Hardware
    - These are Just Building Blocks to Help Those Who Cannot Use IPMI
    - Please See the Forum Topic For Detailed Help!
- Initial Support For Unassigned Drives
  - Added Ability For Unassigned Drives Via 2 Variables (Serial and Path)
  - Added Unassigned Drives to Panels Throughout Dashboard Where Applicable
  - Default Dashboard Comes With Only 1 Unassigned Path Variable
    - You Will Need to Add Additional Path Variables to Include/Exclude Multiple Unassigned Drive Paths
- Support For Multiple Cache Drives in DiskIO Graphs
- Support For Multiple Unassigned Drives in DiskIO Graphs
- Monitoring of ALL System Temps
- Monitoring of ALL System Voltages
- Monitoring of ALL System Fans
- Monitoring of RAM DIMM Temps
- Further GUI Refinements to Assist with Smaller Resolution Monitors
- Variable Changes
  - Removed Redundant And/Or Unneeded Variables (Cleans Up and Reduces Clutter Of Upper Variable Menu)
  - Re-Ordered Variables
    - Smaller Length Variables Are Now First (Typically Row 1)
    - Longer Length Variables Are Now Last (Typically Row 2)
  - Standardized Dashboard to Use Single Datasource Instead of 3
    - Before: Telegraf/Disk/UPS - After: Telegraf
    - This Also Keeps the Variables Menu Cleaner With Less Clutter (2 Less Variables!)
  - Standardized All Variable Names in Title Case With Logical Prefixes, and Added Underscores to Separate Words
  - Shortened Variable Label Text When/Where Possible
- Changed All Panels to Use Default Min Interval Setting of Datasource
  - Set Once on the Datasource and All Panels Not Explicitly Set Will Auto Adjust
  - Only Those Panels Different From the Default Min Interval Are Now Explicitly Set (Example: Array Growth)
- Modified and Added New Auto-Refresh Time Interval Options In Drop Down Menu
  - Now: 30s, 1m, 5m, 10m, 15m, 30m, 1h, 2h, 6h, 12h, 1d
- Replaced All "Retro LCD" Bar Gauges With "Basic" (Cleaner GUI With Unified Aesthetic)
- Adjusted All Panel Thresholds to Be More Accurate on Color Changes (See Bug Fixes)
- Added GROUP BY "time($_interval)" To All Panels (Increases Overall Dashboard Performance)
- Removed Min/Max/Avg Values From All Line Graphs to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
- Corrected Various Grammatical Errors
- Bug Fixes and Optimizations
- Hundreds of Other Quality of Life and Under the Hood Improvements
  - You Can't See Them, But They're There...In Code...LOTS OF CODE

Bug Fixes:

- Changed Remaining Panels Using FROM "autogen" to "default"
- Updated All Aliases to Match Panel Names (There Were Still Some Discrepancies)
- Adjusted All Threshold Values to Be 1/10th Below Desired Measurement
  - Forces Color Change on Next Whole Number
  - Example: 90% Is Supposed to Be Red, But Would Still Show the Preceding Orange Threshold Color (89.9% Resolves This)

New Panels:

- Overwatch
  - System Temps
    - Monitors ALL System Temps (Including CPU)
    - Uses IPMI Unit "degrees_c" to Pull Values Instead of Individual Names
    - Added/Modified Panel Description
  - System Power
    - Monitors ALL System Voltages
    - Uses IPMI Unit "volts" to Pull Values Instead of Individual Names
    - Added/Modified Panel Description
  - Fan Speeds (Replaces Fan Speed Gauges)
    - Monitors ALL System Fans
    - Uses IPMI Unit "rpm" to Pull Values Instead of Individual Names
    - Also Fixes Issue Where Labels Were Not Being Dynamically Generated
    - Added/Modified Panel Description
  - RAM Load
    - Shows Current RAM Usage %
    - Replaces RAM Used %
- Disk I/O
  - Unassigned I/O (Read & Write)
    - Adds Support to Monitor Disk I/O of Unassigned Drives
    - Does Not Show Min/Max/Avg Values On Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
    - Added Ability to Show Multiple Unassigned Drives by Serial Number
- Disk Overview
  - Unassigned Storage
    - Adds Support to Monitor Storage of Unassigned Drives
- Detailed Server Performance
  - RAM DIMM Temps
    - Adds Support to Monitor RAM DIMM Temps
    - Uses IPMI & REGEX

Panel Changes:

- Overwatch
  - ALL Subpanels: Overhauled Look and Feel
  - Array Total: Added Sparkline Graph
  - Array Utilized: Added Sparkline Graph
  - Array Available: Added Sparkline Graph
  - Array Utilized %: Added Sparkline Graph
  - Cache Utilized: Added Sparkline Graph
  - Cache Utilized %: Added Sparkline Graph
  - CPU Load: Added Sparkline Graph
  - RAM Load: Added Sparkline Graph
  - 1GbE Network: Renamed Panel, Changed Orientation to Vertical, Added Sparkline Graph
  - 10GbE Network: Renamed Panel, Changed Orientation to Vertical, Added Sparkline Graph
  - Array Growth (Year): Renamed Panel (Previously Named "Array Growth (Annual)")
- Disk I/O
  - Cache I/O (Read & Write)
    - Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
    - Added Ability to Show Multiple Cache Drives by Serial Number
  - Array I/O (Read): Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
  - Array I/O (Write): Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
- Disk Overview
  - Array Disk Storage
    - Added Used % Field (Now Used to Indicate Drive Free Space By Color)
    - Modified Thresholds to Be More Accurate
  - Total Array Storage
    - Renamed Panel (Previously Named "Array Storage")
    - Added Used % Field (Now Used to Indicate Drive Free Space By Color)
    - Modified Thresholds to Be More Accurate
  - Drive Temperatures
    - Renamed Panel (Formerly "Drive Temperatures (Celsius)")
    - Added Support For Unassigned Drives
- Detailed Server Performance
  - Network Interfaces (RX): Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
  - Network Interfaces (TX): Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements (Shows More Data on Smaller Screens)
  - Network 1GBe: Renamed Panel (Formerly "Network 1GBe (eth0)"), Removed Min/Max/Avg Values From Line Graph
  - Network 10GBe: Renamed Panel (Formerly "Network 10GBe (eth2)"), Removed Min/Max/Avg Values From Line Graph
  - RAM: Removed Min/Max/Avg Values From Line Graph
  - CPU Package: Removed Min/Max/Avg Values From Line Graph
  - CPU 01 Load
    - Renamed Panel (Formerly "CPU 01")
    - Removed Min/Max/Avg Values From Line Graph
    - Removed REGEX and Manually Set Cores Individually
      - Increases Supportability
      - Makes it Easier For Novice Users by Increasing Query Transparency
      - Ensures Tags Stay Ordered Numerically (1,10,11...2,20,21... Is Now 1,2,...10...20...)
    - Renamed Each Core With +1 Array Order Naming (Core 00 Now = Core 01...)
  - CPU 02 Load
    - Renamed Panel (Formerly "CPU 02")
    - Removed Min/Max/Avg Values From Line Graph
    - Removed REGEX and Manually Set Cores Individually
      - Increases Supportability
      - Makes it Easier For Novice Users by Increasing Query Transparency
      - Ensures Tags Stay Ordered Numerically (1,10,11...2,20,21... Is Now 1,2,...10...20...)
    - Renamed Each Core With +1 Array Order Naming (Core 00 Now = Core 01...)
CPU 01 Core Load Changed Bar Gauge Type From "Retro LCD" to "Basic" Changed Bar Gauge Orientation to Vertical CPU 02 Core Load Changed Bar Gauge Type From "Retro LCD" to "Basic" Changed Bar Gauge Orientation to Vertical Fan Speeds Renamed Panel Formerly "IPMI Fan Speeds" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Updated Panel Descriptions: Overwatch System Temps Note: Uses IPMI System Power Note: Uses IPMI Fan Speeds Note: Uses IPMI Array Total: Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Utilized Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Available Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Utilized % Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Growth (Day) Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Growth (Week) Note: Query Options > Min Interval - Must Match on Week/Month/Year To Stay In Sync Set to 2 Hours by Default For Performance Reasons) - Change Path to "mnt/user" if Cache Drive is Not Present\ Disk Overview Array Disk Storage Note: Uses Variable Array Total Storage Note: Change Path to "mnt/user" if Cache Drive is Not Present Unassigned Storage Note: Uses Variable Drive S.M.A.R.T. 
Health Summary Removed Description Drive Life Removed Description Detailed Server Performance CPU 01 Core Load Removed Description CPU 02 Core Load Removed Description RAM DIMM Temps Note: Uses IPMI & REGEX Removed/Converted/Deprecated Panels: Overwatch CPU 01 Temp CPU 02 Temp RAM Free % Fan Speed Gauges Variables: New Drives_Unassigned Used to Select Unassigned Drives(s) From Drop Down Menu Path_Unassigned Used to Set a Single Unassigned Drive Path For Inclusion/Exclusion in Drive Panels Add Additional Unassigned Path Variables to Include/Exclude Additional Unassigned Drive Paths Renamed Host Formerly "host" Datasource_Telegraf Formerly "telegrafdatasource" CPU_Threads Formerly "cputhreads" UPS_Max_Watts Formerly "upsmaxwatt" UPS_kWh_Price Formerly "upskwhprice" Currency Formerly "currency" Drives_Flash Formerly "flashdrive" Drives_Cache Formerly "cachedrives" Drives_Parity Formerly "paritydrives" Drives_Array Formerly "arraydrives" Deprecated diskdatasource upsdatasource See Post Number 1 For the New Version 1.4 JSON File!
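The GROUP BY change called out above can be sketched as a typical Grafana/InfluxDB panel query. This is a hedged example only: the measurement, field, and tag names below are hypothetical and not taken from the actual UUD JSON, but it shows where the time($_interval) grouping and the "default" retention policy sit in such a query:

```
SELECT mean("used_percent")
FROM "default"."disk"
WHERE ("host" =~ /^$Host$/) AND $timeFilter
GROUP BY time($_interval) fill(null)
```

Grouping by the dashboard's interval variable keeps the number of returned points proportional to the panel width, which is where the performance gain comes from.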
  25. 3 points
    The time has nearly come. Just finishing up documentation.
  26. 3 points
Yes, fair enough, but how do they finance themselves? I mean, look at TMDB. Their site doesn't look like they're short on capable developers. EDIT: Aha, the money comes from TiVo (source), and they license the data on to other companies. So much for "community". Then I'd like to be paid for my updates to the database, thank you very much. Must have been about 10 entries or so ^^
  27. 3 points
Unraid 6.9.x is available in French in its Beta version (currently Beta29). Despite its "beta" name, it is very stable and has been in beta for a very long time. I have been on the betas since they came out and have had no issues.
  28. 3 points
@Naonak I will build them ASAP. Prebuilt images are now finished and ready to download.
  29. 3 points
Supposed to be 6.10, or whatever it's called. They said that in the podcast interview. Make your own builds for now, or download ich777's builds when he posts them, instead of waiting for linuxserver.io builds.
  30. 3 points
To get the interface in German, add the following to papermerge.conf.py in the appdata folder:

LANGUAGE_CODE = "de-DE"

To get German language support for OCR, go to the docker shell and type:

apt-get install tesseract-ocr-deu

and change the following in papermerge.conf.py like this:

OCR_DEFAULT_LANGUAGE = "deu"
OCR_LANGUAGES = {
    "deu": "Deutsch",
}

And voila! German interface and OCR.
  31. 3 points
I would suggest removing the plugin, then moving everything. I'm working on getting a dev box up and running so I can continue working on this for the new Beta.
  32. 3 points
    It's not the socks credentials you need, it's your main username (pXXXXXXX) & password for PIA.
  33. 3 points
Added workaround: so the drive can be seen on the 2116. FYI, my FW and BIOS versions:

mpt2sas_cm1: LSISAS2116: FWVersion(, ChipRevision(0x02), BiosVersion(
  34. 3 points
Because it's been asked a few times - yes, I am working on WireGuard support for PIA now. It's going OK, but there is a fair bit to do to integrate it with the existing code, so it may take a little while; hopefully it should be done fairly 'soon' (trademark LimeTech 🙂). WireGuard users on other VPN providers (non-PIA) - a question for you: is your WireGuard config file static, or are there any dynamically generated parts to it? Please detail what is dynamic (if anything) and your VPN provider's name. I'm trying to make any code I write as VPN-provider-agnostic as possible.
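For anyone answering, a generic WireGuard client config looks like the sketch below (every value is a placeholder, not from any particular provider); the question above amounts to which of these fields your provider generates dynamically, e.g. a per-session PrivateKey/Address or a rotating Endpoint:

```
[Interface]
PrivateKey = <client-private-key>   # static with some providers, per-session with others
Address = 10.64.0.2/32              # often assigned dynamically by the provider
DNS = 10.64.0.1

[Peer]
PublicKey = <server-public-key>     # changes if the provider rotates servers
Endpoint = vpn.example.com:51820    # server hostname and port
AllowedIPs = 0.0.0.0/0, ::/0
```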
  35. 3 points
I decided to take a gamble and changed preview to nightly (edited the template and replaced preview with nightly under Repository) and it updated and no longer complains. Was on under preview and now on under nightly... so it worked, I guess?
  36. 3 points
    Support for multi remote endpoints and PIA 'Next-Gen' networks now complete, see Q19 and Q20 for details:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  37. 3 points
    Running on a 3700X here. I can confirm this bug is fixed.
  38. 3 points
  39. 3 points
    Don't do that unless you have bonded the UPS ground to earth through some other means. Any current imbalance between the equipment powered by the UPS and other equipment still grounded will travel through whatever connections are still there, probably your network at the very least. Turn off the circuit at the breaker, or temporarily plug the UPS into a switched outlet or power strip, but don't just yank the plug out of the wall.
  40. 3 points
You don't quite need all those super advanced techniques, as they only offer marginal improvement (if any). And then you have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for TR gen 1 + 2) and some were workarounds while waiting for the software to catch up with the hardware (e.g. cpu-mode tweaks to fake TR as Epyc so cache is used correctly are no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel which supposedly works better with 3rd-gen Ryzen; compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.)

In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):

1. Pick all the VM cores from the same CCX and CCD (i.e. die) to improve fps consistency (i.e. less stutter). Note: this is specific to a gaming VM, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCD as possible, even if it means partially using a CCX/CCD.
2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was just isolcpus).
3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads, but that only applies if you use vdisk / ata-id pass-through. From my own testing, iothread pinning makes no difference with NVMe PCIe pass-through.
4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction. To be honest though, I haven't found this to make any difference.
5. Don't run parity sync or any heavy IO / CPU activities while gaming.

In terms of where you can find these settings:

1. The 3900X has 12 cores, which is 3x4 -> every 3 cores is a CCX, every 2 CCX is a die (and your 3900X has 2 dies + an IO die).
2. Watch the SpaceInvader One tutorial on YouTube. Just remember to do what you do with isolcpus to nohz_full + rcu_nocbs as well.
3. Watch the SpaceInvader One tutorial on YouTube. This is a VM xml edit.
4. Watch the SpaceInvader One tutorial on YouTube. He has a link to download the msi_util.
5. No explanation needed.

Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability above bare metal.
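As a sketch of the core-isolation advice, assuming a 3900X with the VM pinned to cores 6-11 and their SMT siblings 18-23 (the core numbers here are placeholders; match them to your own pinning), the append line in syslinux.cfg might look like:

```
label Unraid OS
  menu default
  kernel /bzimage
  append isolcpus=6-11,18-23 nohz_full=6-11,18-23 rcu_nocbs=6-11,18-23 initrd=/bzroot
```

Note that all three parameters take the same CPU list.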
  41. 3 points
Turbo Write
technically known as "reconstruct write" - a new method for updating parity

JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail: what it is, how it compares with the traditional method, and the ramifications of using it.

First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate.

Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block, and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.
To summarize, for the "read/modify/write" method, you need to:
* read in the parity block and read in the existing data block (can be done simultaneously)
* compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
* wait for platter rotation (very long!)
* write out the parity block and write out the data block (can be done simultaneously)

That's 2 reads, a calc, a long wait, and 2 writes.

Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!

To summarize, for the "reconstruct write" method, you need to:
* write out the data block while simultaneously reading in the data blocks of all other data drives
* calculate the new parity block from all of the data blocks, including the new one (very short)
* write out the parity block

That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster!

The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write.

So what are the ramifications of this?
* For some operations, like parity checks and parity builds and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
* For large write operations, like large transfers to the array, it can make a big difference in speed!
* For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
* And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
* So one of the questions to be faced is, how do you want your various write operations to be handled? Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so it tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer spinning them up than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
* Another possible problem: if you were in Turbo mode, and you are watching a movie streaming to your player, then a write kicks in to the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.
Tom talked about that Auto mode quite a while ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use.

So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite a while ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day, and the other method at night. I think many users may find that scheduling it may satisfy their needs: Turbo when there's lots of writing, old style overnight and when they are streaming movies.

For a while, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers are complete.

Edit: added what the setting is and where it's located (completely forgot this!)
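The parity math behind both methods is plain XOR, which you can sanity-check in a shell. This is a toy sketch with made-up one-byte "blocks" on a hypothetical 3-data-drive array (nothing from the md driver itself); the point is that both update methods must land on the same parity value:

```shell
# Toy blocks for a hypothetical 3-data-drive array (one byte each)
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0xF0 ))
parity=$(( d1 ^ d2 ^ d3 ))          # initial parity across all data drives

new_d2=$(( 0x55 ))                  # block we want to write to drive 2

# read/modify/write: old parity XOR old data XOR new data (needs 2 reads)
parity_rmw=$(( parity ^ d2 ^ new_d2 ))

# reconstruct write: XOR the new block with all *other* data drives
parity_recon=$(( d1 ^ new_d2 ^ d3 ))

echo "rmw=$parity_rmw recon=$parity_recon"
```

The difference between the two methods is purely which drives get read, not the resulting parity.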
  42. 3 points
    OpenVPN support for PIA 'next-gen' network is now in, see Q19 for how to switch from legacy network to next-gen in the link below Multi remote endpoint support is now in see Q20 for how to define multiple endpoints (OpenVPN only) in the link below WireGuard support is now included, see Q21 for how to switch from OpenVPN to WireGuard in the link below https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  43. 2 points
Unfortunately I can't help you with Macinabox, but maybe one of these users can: @angelstriker, @thilo, @wubbl0rz, @ipxl. There is already a thread here about Macinabox, but unfortunately on a different topic: click
  44. 2 points
  45. 2 points
***Update***: Apologies, it seems there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

===========

Granted, this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to @Hoopster, whose post(s) I took this from.

What is Plex Hardware Acceleration?
When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media that:
- Media is stored in a compatible file container
- Media is encoded in a compatible bitrate
- Media is encoded with compatible codecs
- Media is a compatible resolution
- Bandwidth is sufficient

If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little if any overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file. The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time.
Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage it using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?
You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled), it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a Quality that isn't the default one. If you head back to the Now Playing section in Plex, you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

Prerequisites
1. A Plex Pass - if you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass.
2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. A compatible motherboard.

You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines, like the HP MicroServer Gen8, have iLO / IPMI which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs and ALL GPU output from the server passes through the ancient Matrox GPU.
So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup
If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Login to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

cd /dev/dri
ls

If you see an output like the one above, your unRAID box has its Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see it, not to worry, type this:

modprobe i915

There should be no return or errors in the output. Now again run:

cd /dev/dri
ls

You should see the expected items, i.e. card0 and renderD128.

Give your Container Access
Lastly, we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a maker of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab, click on your Plex container, scroll to the bottom and click on Add another Path, Port, Variable. Select Device from the drop down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder.
Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its Quality to something that isn't its original format, and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration.

Persist your config
On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl X, followed by Y, to save your go file. And you should be golden!
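If you'd rather script the go-file step than edit it in nano, a hedged sketch is below. It appends each line only if it isn't already present, so re-running it won't duplicate entries. GO_FILE is /boot/config/go on a real Unraid box; it defaults to a scratch file here so the snippet can be dry-run anywhere:

```shell
# Append each persistence line to the go file only if it isn't already there.
# On Unraid, run with GO_FILE=/boot/config/go; the default is a safe dry-run target.
GO_FILE="${GO_FILE:-/tmp/go-demo.$$}"
for line in "modprobe i915" "chmod -R 777 /dev/dri"; do
    grep -qxF "$line" "$GO_FILE" 2>/dev/null || echo "$line" >> "$GO_FILE"
done
cat "$GO_FILE"
```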
  46. 2 points
Current plan is to revert back to libvirt-6.5, but we are going to try and test libvirt-6.8 today, though I don't see any commit that references this issue: https://gitlab.com/libvirt/libvirt/-/commits/master Not even sure how the libvirt team manages post-release bug fixes. Their "maintenance" branches seem to end with v5.3.
  47. 2 points
Now that I've got some time to kill, I'm more than happy to clear up some things... That's you telling (fabricating) that my offer wasn't sincere. If I recall correctly, I told you I was more than happy to do it AND that I would try to get outside help on board, from linuxserver and selfhosters team members. I also did not say that I had a problem with a "moderated system"; I only said/implied that I had a problem with a moderated system that's moderated by you and how you are feeling when you get out of bed in the morning. This is mainly due to a lack of knowledge on your part and not wanting to spend the time to research the differences by comparing the repos and the offered features of the different images. From personal experience I can say that the Plex Inc. container can be very lackluster when it comes to updates. I recall a time I was waiting almost more than 2 weeks before I finally gave up and ditched their container. Some of their other tags at one time hadn't seen any security updates in over a year (if you think that's fine... be my guest). linuxserver and hotio are very similar, although hotio provides an additional tag that can be useful for some people, on top of some environment variables that can aid in server discovery and in case of a lockout. I can't say much about binhex, besides that it'll probably be Arch based. I have never heard of needio, and Docker Hub gives me 1 result, an image that's severely outdated. But let's not forget that this all started with you wanting to remove the official Scrutiny containers, which are very well maintained by the developer of Scrutiny, the app. Selfhosters merely provided the means to let people enjoy those containers where the developer has no interest in supporting CA. As for the fact that linuxserver and hotio also made a container: for me personally it was because I felt that the official one didn't behave like I wanted it to and thought I could do better.
Your rules also dictate, if I recall correctly, that it is not allowed to put other people's stuff in the template repo, the exception being selfhosters. Here you are just making up badly chosen examples to prove your point. If Candy Crush were fit to be put in a docker container, I'm sure you'd find multiple on Docker Hub. But why don't you try doing a search for "linux distro" on Google... see what comes up then. Why so many? They all run the same kernel basically... well, that's FOSS at work for you. You keep saying that, but stifling innovation is exactly what you are doing. Many have looked at CA and said "screw it, not going to bother with these rules" and moved on. "Any one can always install a random app they find on docker hub." You should try doing that as a new user, telling the forum that you installed something from Docker Hub... it'll take only a couple of minutes before somebody comes telling you that it's a stupid way of doing things; just install CA, they'll say. "For the benefit of the Unraid community", "to not confuse the users"... all these nice catchphrases, they sound marvelous. Newsflash: it is those users, aka the community you keep using as a shield, who are requesting these "multiple" containers to be included, because they like choice and don't eat only vanilla ice cream their entire life. "And in cases of disputes I accept the decisions made by the moderators and the administrators of this forum itself." This is absolutely true, but then you go and rewrite CA and make up other rules (which can change at a whim) to get your way anyway. Let's hope you'll be making better decisions in the future; otherwise I have my doubts about it being "safe and secure". Pushing people towards a container that hasn't seen security updates in over a year does not equal "safe and secure".
  48. 2 points
I am not 100% sure if my problem is related to the one above, but I have several VMs that use a passed-through unassigned device as their main disk. Since Beta 29, none of them boots anymore, with the following error (example):

Execution error
Unable to get devmapper targets for /dev/sdh: No such file or directory

In the LIBVIRT log, I got the following entries:

2020-09-29 09:29:24.847+0000: 6045: error : virDevMapperOnceInit:78 : internal error: Unable to find major for device-mapper
2020-09-29 09:29:24.847+0000: 6045: error : qemuSetupImagePathCgroup:91 : Unable to get devmapper targets for /dev/sdh: Success
2020-09-29 09:45:55.392+0000: 6048: warning : qemuDomainObjTaint:5983 : Domain id=2 name='Windows 10' uuid=e2a7c568-efce-61c9-a7ad-789c710201fc is tainted: high-privileges
2020-09-29 09:45:55.392+0000: 6048: warning : qemuDomainObjTaint:5983 : Domain id=2 name='Windows 10' uuid=e2a7c568-efce-61c9-a7ad-789c710201fc is tainted: host-cpu
2020-09-29 09:45:55.408+0000: 6048: error : qemuSetupImagePathCgroup:91 : Unable to get devmapper targets for /dev/sdh: No such file or directory

Any idea what this problem is related to, or how it can be solved? Maybe with the next beta?
  49. 2 points
    @testdasi You may want to consider this. I’m tentatively planning on adding Varken/Plex panels/stat tracking to the Ultimate UNRAID Dashboard (UUD) in version 1.5. Not a guarantee until I get into it, but if I can integrate some/all of it, that would be cool.
  50. 2 points
Hi Everyone, thank you for the great information in this thread. I am adding a few more tweaks and notes below for running the native Unraid Dynamix WireGuard simultaneously with the Linuxserver WireGuard docker. I now have two working instances of WireGuard on my machine, one of them dedicated to whatever dockers I decide to route through the new WireGuard VPN. When initially created, I named my new docker "wireguard4dockers" as shown below.

When downloaded, you have to add a lot of the variables into the template yourself, so this takes time, and if you make an error (like I did the first time), you might think you have lost your work when you click the "Apply" button and the template disappears; but if you go to the CA "APPS" tab, you can reinstall the template and pick up right where you left off.

First, since you are adding this as a new docker and probably already have WireGuard set up on Unraid, change the ListenPort when you enter your information into the template, so there's no port conflict between this WireGuard docker and the built-in WireGuard. By default the Unraid WireGuard ListenPort is 51820, which is also the standard ListenPort of the Linuxserver docker.

Secondly, make sure you set the config mapping properly so the docker saves into your "appdata" folder: Container Path: /config, and Host Path: your specific location. I initially did not set it up properly and couldn't figure out why my folder was blank, until I realized I had left out the slash in front of "config". Also, don't forget to create the "config" folder inside your own "wireguard4dockers" folder.

I also changed the internal SUBNET to something completely different from the built-in WireGuard's to avoid any conflicts. Not sure if this was necessary, but I thought it couldn't hurt.
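For anyone who prefers the command line over the template, the settings above can be sketched as a single docker run. A sketch under assumptions: the paths, timezone, port and subnet values here are hypothetical stand-ins for your own; the SERVERPORT and INTERNAL_SUBNET variables come from the linuxserver/wireguard image:

```shell
# Sketch only: hypothetical values throughout; adapt paths, port,
# timezone and subnet to your own system.
docker run -d \
  --name=wireguard4dockers \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=99 \
  -e PGID=100 \
  -e TZ=Europe/Brussels \
  -e SERVERPORT=51821 \
  -e INTERNAL_SUBNET=10.14.14.0 \
  -p 51821:51820/udp \
  -v /mnt/user/appdata/wireguard4dockers/config:/config \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  ghcr.io/linuxserver/wireguard
```

Note the host side of the port mapping is 51821, not 51820, precisely to avoid clashing with the built-in WireGuard, and INTERNAL_SUBNET is deliberately different from the built-in tunnel's subnet.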
Also, take note that once you have the template created and saved as an operational docker, if you import a pre-made config file into this docker's config folder, you need to name the file "wg0" (that's a lowercase w, g and a zero), or create your new tunnel named "wg0". This was noted on one of the many pages of posts in the links that danofun included above.

Lastly, I had to include my specific local LAN address in the "PostUp" and "PostDown" lines of the config file, part of another tip mentioned in previous posts; in my file these two lines are (with my own LAN subnet in place of <LAN subnet>):

PostUp = ip route add <LAN subnet> via $(ip route | awk '/default/ {print $3}') dev eth0
PostDown = ip route del <LAN subnet> via $(ip route | awk '/default/ {print $3}') dev eth0

Going this route, I did NOT need to add the "LAN_NETWORK" environment variable (populated with your LAN) into the docker template, as noted in the posts above.

Here are a few snippets of my "wireguard4dockers" template. Please note, I also downloaded the Firefox docker to check connectivity, following other posts on how to link other dockers to your VPN docker. Add the Firefox ports while you are setting up the "wireguard4dockers" VPN under the "advanced view": port 7814 will be the port you use to reach the Firefox web GUI.

Using the posts in this thread, as well as the links provided by everyone, I was able to create a fully functioning secondary WireGuard docker VPN in less than an hour. This proves that we can in fact use an off-the-shelf WireGuard docker template as a VPN for specific docker containers, while at the same time using the built-in WireGuard controls for your other VPN needs, overcoming the limitation of not being able to run a "VPN tunneled access" tunnel alongside another tunnel instance within the built-in Unraid Dynamix WireGuard plugin. I hope this helps others.
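Putting the pieces together, the wg0.conf ends up looking roughly like the sketch below. Everything here is a hypothetical placeholder: 192.168.1.0/24 stands in for your own LAN subnet, and the keys, addresses and endpoint for whatever your provider or peer config gives you:

```ini
# Sketch of wg0.conf; all values are hypothetical placeholders.
[Interface]
Address = 10.14.14.2
PrivateKey = <your-private-key>
ListenPort = 51820
# Route your LAN past the tunnel so the web GUIs stay reachable;
# 192.168.1.0/24 is a stand-in for your own LAN subnet.
PostUp = ip route add 192.168.1.0/24 via $(ip route | awk '/default/ {print $3}') dev eth0
PostDown = ip route del 192.168.1.0/24 via $(ip route | awk '/default/ {print $3}') dev eth0

[Peer]
PublicKey = <peer-public-key>
Endpoint = <vpn-endpoint>:51820
AllowedIPs = 0.0.0.0/0
```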
I really haven't done anything different other than compile a few critical pieces of information in the same thread. I spend a lot of time browsing the forum for information and am always amazed at what can be found here; having run through this process this evening, I thought this additional data might be helpful for others to have. Kudos and a big thank you to everyone prior who paved the way for amateurs like myself, who are able to stumble through, make something work, and confirm that what others have accomplished does in fact work. Thanks,