
Leaderboard


Popular Content

Showing content with the highest reputation since 09/29/20 in Posts

  1. 8 points
    Something really cool happened. I woke up this morning and my profile had this green wording on it. I just officially became the newest UNRAID "Community Developer". I just wanted to say thanks to UNRAID and @SpencerJ for bestowing this honor. I am pleased to be able to add to the community overall. You guys rock! Honestly, I have been a part of many forums over the years, and I have never seen a community so eager to help, never condescending, always maintaining professional decorum, and overall just a great place to be. I'm proud to be a part of it!
  2. 6 points
    Sure is. When Big Sur is officially released, a new Macinabox will be out, with a bunch of new features and a choice between the OpenCore and Clover bootloaders.
  3. 5 points
    Unraid Kernel Helper/Builder
With this container you can build your own customized Unraid kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the Kernel/Firmware/Modules/Rootfilesystem with nVidia & DVB drivers (currently DigitalDevices, LibreElec, XBOX One USB Adapter and TBS OpenSource drivers selectable); optionally you can also enable ZFS, iSCSI Target, Intel iGPU and Mellanox Firmware Tools support (Mellanox only for 6.9.0 and up).
nVidia driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.
ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise it is possible that the built images will not work. You can also set the ZFS version from 'latest' to 'master' to build from the latest branch on GitHub if you are using the 6.9.0 repo of the container.
iSCSI Target: Please note that this feature is currently command line only! The Unraid-Kernel-Helper plugin now has a basic GUI for creation/deletion of IQNs, FileIO/Block volumes, LUNs and ACLs. ATTENTION: Always mount a block volume with the path '/dev/disk/by-id/...' (otherwise you risk data loss)! For instructions on how to create a target, read the manuals: Manual Block Volume.txt Manual FileIO Volume.txt
ATTENTION: Please read the description of the variables carefully! Once you have started the container, don't interrupt the build process; the container will shut down automatically when everything is finished. I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes, and replace 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself can take a long time depending on your hardware, but should be done in roughly 30 minutes (some tasks can take very long depending on your hardware, please be patient).
Plugin now available (will show all information about the images/drivers/modules that it can get): https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg Or simply download it through the CA App.
This is how the build of the images works (simplified): The build process begins as soon as the Docker container starts (the container shows as stopped when the process is finished). Please be sure to set the build options that you need. Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to follow the log (the browser log window can be very laggy depending on how many components you choose). The whole process status is outlined by watching the logs (the button on the right of the docker). The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you also need to delete or move them for any subsequent builds. There is a backup copied to /mnt/cache/appdata/kernel/backup-VERSION. Copy that to another drive external to your Unraid server, so that you can easily copy it straight onto the Unraid USB if something goes wrong. (A short shell sketch of this workflow is included at the end of this post.)
THIS CONTAINER WILL NOT CHANGE ANYTHING ON YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE. YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER.
PLEASE BACK UP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER. THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW THIS WORKS.
UPDATE NOTICE: If a new update of Unraid is released, you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g. 'ich777/unraid-kernel-helper:6.8.3'.
Forum notice: When something isn't working with or on your server and you make a forum post, always mention that you use a kernel built by this container! Note that LimeTech does not support custom kernels, and you should ask in this thread if something is not working while you are using this specific kernel.
CUSTOM_MODE: This is only for advanced users! In this mode the container will stop right at the beginning and copy the build script and the dependencies needed to build the kernel modules for DVB and joydev into the main directory (I highly recommend using this mode for changing things in the build script, like adding patches or other modules to build; connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory; the build script is executable).
Note: You can use the nVidia & DVB plugins from linuxserver.io to check if your driver is installed correctly (keep in mind that some things will display wrong or not show up, like the driver version in the nVidia plugin - but you will see the installed graphics cards; likewise the DVB plugin will claim that no kernel driver is installed but will still show your installed cards - this is simply because I don't know how their plugins work). Thanks to @Leoyzen, klueska from nVidia and linuxserver.io for the motivation to look into how this all works.
For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers! After you have finished building the images I recommend deleting the container! If you want to build again, please redownload it from the CA App so that the template is always the newest version!
Beta Build (the following is a tutorial for v6.9.0): Upgrade to your preferred stock beta version first, reboot and then start building (to avoid problems)! Download/redownload the template from the CA App and change the following things:
  1. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'
  2. Select the build options that you prefer
  3. Click on 'Show more settings...'
  4. Set Beta Build to 'true' (you can also put in, for example, 'beta25' without quotes to automatically download Unraid v6.9.0-beta25, in which case the remaining steps are not required)
  5. Start the container and it will create the folder '/stock/beta' inside the main folder
  6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them over within these 2 minutes, simply restart the container and the build will start once it finds all files). (You can get the files bzimage, bzroot, bzmodules and bzfirmware from the beta zip file from Limetech, or better, first upgrade to that beta version and then copy the files from your /boot directory to the directory created in step 5, to avoid problems.) !!!
Please also note that if you build anything beta, keep an eye on the logs, especially when it comes to building the kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!
IRC: irc.minenet.at:6697
Here you can download the prebuilt images:
v6.8.3:
- Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 450.66)
- Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 450.66 | LE driver: 1.4.0)
- Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 450.66 | ZFS version: 0.8.4)
- Unraid Custom DVB builtin v6.8.3: Download (LE driver: 1.4.0)
- Unraid Custom ZFS builtin v6.8.3: Download (ZFS version: 0.8.4)
- Unraid Custom iSCSI builtin v6.8.3: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
v6.9.0-beta25:
- Unraid Custom nVidia builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66)
- Unraid Custom nVidia & DVB builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66 | LE driver: 1.4.0)
- Unraid Custom nVidia & ZFS builtin v6.9.0-beta25: Download (nVidia beta driver: 450.66 | ZFS build from the 'master' branch on GitHub, 2020.08.19)
- Unraid Custom ZFS builtin v6.9.0-beta25: Download (ZFS build from the 'master' branch on GitHub, 2020.07.12)
- Unraid Custom iSCSI builtin v6.9.0-beta25: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
v6.9.0-beta29:
- Unraid Custom nVidia builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04)
- Unraid Custom nVidia & DVB builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04 | LE driver: 1.4.0)
- Unraid Custom nVidia & ZFS builtin v6.9.0-beta29: Download (nVidia beta driver: 455.23.04 | ZFS v2.0.0-rc2)
- Unraid Custom ZFS builtin v6.9.0-beta29: Download (ZFS v2.0.0-rc2)
- Unraid Custom iSCSI builtin v6.9.0-beta29: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
v6.9.0-beta30:
- Unraid Custom nVidia builtin v6.9.0-beta30: Download (nVidia driver: 455.28)
- Unraid Custom nVidia & DVB builtin v6.9.0-beta30: Download (nVidia driver: 455.28 | LE driver: 1.4.0)
- Unraid Custom nVidia & ZFS builtin v6.9.0-beta30: Download (nVidia driver: 455.28 | ZFS 0.8.5)
- Unraid Custom ZFS builtin v6.9.0-beta30: Download (ZFS 0.8.5)
- Unraid Custom iSCSI builtin v6.9.0-beta30: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
If you like my work, please consider making a donation.
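As referenced above, here is a minimal shell sketch of that workflow: follow the build log, then copy the finished images to the flash drive. The container name, version folder and file names are assumptions based on the defaults described in this post; adjust them to your actual build.
# follow the build log of the helper container (default container name assumed)
docker attach Unraid-Kernel-Helper
# after the container has stopped: back up the current flash images first,
# then copy the freshly built files over (output folder name is an example)
mkdir -p /boot/backup-stock
cp /boot/bzimage /boot/bzroot /boot/bzmodules /boot/bzfirmware /boot/backup-stock/
cp /mnt/cache/appdata/kernel/output-6.9.0/bz* /boot/
# reboot afterwards to boot into the custom images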
  4. 5 points
    This blog is a guide on how to securely back up one Unraid server to another geographically separated Unraid server using rsync and Wireguard by @spx404. If you have questions, comments or just want to say hey, post them here! https://unraid.net/blog/unraid-server-to-server-backups-with-rsync-and-wireguard
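The blog covers the full setup; purely as a flavour of the end result, a hedged sketch of a push-style rsync backup over an already-established WireGuard tunnel might look like this (the tunnel address, SSH port and share paths are placeholders, not values from the blog):
#!/bin/bash
# assumes the WireGuard tunnel to the remote Unraid server is already up
# and 10.253.0.2 is its tunnel-side address (placeholder)
rsync -avh --delete --progress \
  -e "ssh -p 22" \
  /mnt/user/Backups/ \
  root@10.253.0.2:/mnt/user/Backups-offsite/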
  5. 5 points
    Can the stable version 6.8.3 get the latest nVidia drivers? Plex betas (and likely the next stable) now require driver 450.66 or later to keep NVENC operational -- hardware transcoding is broken on the latest versions because of this. I prefer not to run betas, so essentially I am stuck on a previous working Plex version (1.20.1.3252-1-01). https://www.reddit.com/r/PleX/comments/j0bzu1/it_was_working_before_but_now_im_having_issues/ https://forums.plex.tv/t/help-fixing-my-broken-nvidia-nvenc-driver-config-ubuntu-20-04/637854 https://forums.plex.tv/t/nvenc-hardware-encoding-broken-on-1-20-2-3370/637925
  6. 5 points
    OK, testing is now over. I am satisfied that the PIA next-gen network using OpenVPN is working well enough to push this to production, so the changes have now been included in the 'latest' tagged image. If you have been using the 'test' tagged image, please drop ':test' from the repository name and click on Apply to pick up 'latest' again. If you want to switch from current to next-gen, please generate a new ovpn file using the following procedure. Note: the new image will still support the current (legacy) PIA network for now, so you can use either, dictated by the ovpn file.
  7. 5 points
    Added prebuilt images for Unraid v6.9.0-beta29 to the bottom of the first post. Prebuilt images include: nVidia, nVidia & DVB, nVidia & ZFS, ZFS, iSCSI.
  8. 4 points
    This was an idea of @frodr. After several rounds of testing I came to the following script, which:
- Calculates how many video files (small parts of them) will fit into 50% of the free RAM (the amount can be changed)
- Obtains the X most recent movies/episodes (depending on the used path)
- Preloads the first 60MB and the last 1MB of each video file into RAM
- Preloads the subtitle files that belong to the preloaded video files
Now, if your disks are sleeping and you start a movie or episode through Plex, the client downloads the first part of the video from RAM, and while that buffer is being emptied the HDD spins up and the buffer fills up again. This means all preloaded movies/episodes start directly, without delay.
Notes:
- It does not reserve any RAM, so your RAM stays fully available.
- RAM preloading is not permanent and will be overwritten by server uploads/downloads or other processes over time. I suggest executing this script once per day (only missing video files will be touched); see the short schedule example after this post.
- For best reliability, execute the script AFTER all your backup scripts have completed (rsync, for example, can use all available RAM while syncing).
- If you suffer from buffering at the beginning of a movie/episode, try raising "video_min_size".
- All preloaded videos are listed in the script's log (CA User Scripts).
Script:
#!/bin/bash
# #####################################
# Script: Plex Preloader v0.9
# Description: Preloads the recent video files of a specific path into the RAM to bypass HDD spinup latency
# Author: Marc Gutt
#
# Changelog:
# 0.9
# - Preloads only subtitle files that belong to preloaded video files
# 0.8
# - Bug fix: In some situations video files were skipped instead of preloading them
# 0.7
# - Unraid dashboard notification added
# - Removed benchmark for subtitle preloading
# 0.6
# - multiple video path support
# 0.5
# - replaced the word "movie" with "video" as this script can be used for TV shows as well
# - reduced preload_tail_size to 1MB
# 0.4
# - precleaning cache is now optional
# 0.3
# - the read cache is cleaned before preloading starts
# 0.2
# - preloading time is measured
# 0.1
# - first release
#
# ######### Settings ##################
video_paths=(
  "/mnt/user/Movies/"
  "/mnt/user/TV/"
)
video_min_size="2000MB" # 2GB, to exclude bonus content
preload_head_size="60MB" # 60MB, raise this value if your video buffers after ~5 seconds
preload_tail_size="1MB" # 1MB, should be sufficient even for 4K
video_ext='avi|mkv|mov|mp4|mpeg' # https://support.plex.tv/articles/203824396-what-media-formats-are-supported/
sub_ext='srt|smi|ssa|ass|vtt' # https://support.plex.tv/articles/200471133-adding-local-subtitles-to-your-media/#toc-1
free_ram_usage_percent=50
preclean_cache=0
notification=1
# #####################################
#
# ######### Script ####################
# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi
trap 'rmdir "/tmp/${0///}"' EXIT
# check user settings
video_min_size="${video_min_size//[!0-9.]/}" # float filtering https://stackoverflow.com/a/19724571/318765
video_min_size=$(awk "BEGIN { print $video_min_size*1000000}") # convert MB to Bytes
preload_head_size="${preload_head_size//[!0-9.]/}"
preload_head_size=$(awk "BEGIN { print $preload_head_size*1000000}")
preload_tail_size="${preload_tail_size//[!0-9.]/}"
preload_tail_size=$(awk "BEGIN { print $preload_tail_size*1000000}")
# clean the read cache
if [ "$preclean_cache" = "1" ]; then
  sync; echo 1 > /proc/sys/vm/drop_caches
fi
# preload
preloaded=0
skipped=0
preload_total_size=$(($preload_head_size + $preload_tail_size))
free_ram=$(free -b | awk '/^Mem:/{print $7}')
free_ram=$(($free_ram / 100 * $free_ram_usage_percent))
echo "Available RAM in Bytes: $free_ram"
preload_amount=$(($free_ram / $preload_total_size))
echo "Amount of Videos that can be preloaded: $preload_amount"
# fetch video files
while IFS= read -r -d '' file; do
  if [[ $preload_amount -le 0 ]]; then break; fi
  size=$(stat -c%s "$file")
  if [ "$size" -gt "$video_min_size" ]; then
    TIMEFORMAT=%R
    benchmark=$(time ( head -c $preload_head_size "$file" ) 2>&1 1>/dev/null )
    echo "Preload $file (${benchmark}s)"
    if awk 'BEGIN {exit !('$benchmark' >= '0.150')}'; then
      preloaded=$((preloaded + 1))
    else
      skipped=$((skipped + 1))
    fi
    tail -c $preload_tail_size "$file" > /dev/null
    preload_amount=$(($preload_amount - 1))
    video_path=$(dirname "$file")
    # fetch subtitle files
    find "$video_path" -regextype posix-extended -regex ".*\.($sub_ext)" -print0 | while IFS= read -r -d '' file; do
      echo "Preload $file"
      cat "$file" >/dev/null
    done
  fi
done < <(find "${video_paths[@]}" -regextype posix-extended -regex ".*\.($video_ext)" -printf "%T@ %p\n" | sort -nr | cut -f2- -d" " | tr '\n' '\0')
# notification
if [[ $preloaded -eq 0 ]] && [[ $skipped -eq 0 ]]; then
  /usr/local/emhttp/webGui/scripts/notify -i alert -s "Plex Preloader failed!" -d "No video file has been preloaded (wrong path?)!"
elif [ "$notification" = "1" ]; then
  /usr/local/emhttp/webGui/scripts/notify -i normal -s "Plex Preloader has finished" -d "$preloaded preloaded (from Disk) / $skipped skipped (already in RAM)"
fi
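As mentioned in the notes, the script is meant to run once a day. If you schedule it via the CA User Scripts plugin's "Custom" schedule, that field takes a standard cron expression; a hedged example (the time of day is arbitrary - pick one after your backups have finished):
# run the preloader daily at 06:30
30 6 * * *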
  9. 4 points
    Hi all,
It's not often I write in forums or express my opinions, but today is a day I feel I must. I have been using Unraid for quite a few years now - I believe it's been over 5 years, but for the life of me I cannot find the email from when I bought my licence. When I embarked on the journey of wanting RAID-like features without actually using a traditional RAID, I looked over loads of options and even bought and tried FlexRAID, and thought it was working well until I had a drive failure it did not warn me of, and data was lost because it refused to recover. Plus a few other issues, but I do not like to badmouth other products. So I started looking again and saw the various offerings, but none of them was really user friendly for a noob like me. Then I happened to stumble onto Unraid and thought I would give the limited (I believe it was 3 HDD) trial a go. Oh, it was everything I ever needed in a storage solution: simple to use, can upgrade HDDs down the line without a painful process, etc. A short while later disaster hit and I suffered a failed 3TB Seagate HDD (I know plenty of us have suffered losses due to Seagate). But I still had access to all my data until a new drive came. Popped in the new drive, it rebuilt, and away I went again. Then one day I thought I really wanted to upgrade my array to handle 6TB drives and was not looking forward to it. Found it was just easy to do again: rebuilds happened and once again Unraid just worked. Last week another Seagate 3TB failed (surprised it lasted so long), and as I am writing this a 6TB HDD has replaced it and is 1.5% into the rebuild. So really, thank you Lime Technology for selling a product that works well and for keeping up active development and adding features (also adding more drives to the Plus licence). Well done.
Kevin
EDIT: Just found my reg date: Sat 04 Oct 2014 09:10:13 PM BST
  10. 4 points
  11. 4 points
    I've been trialling Nextcloud as a cloud backup service for myself, and if successful, my family which live remotely. I'm using Duplicati to perform the backups, but that's not the point of this guide. The point is that when I backup files to Nextcloud, the Docker Image slowly fills up. I've never had it reach a point where the Image reaches 100%, but it's probably not a fun time. After searching the internet for hours trying to find anything, I eventually figured out what was required. For context, I'm running Nextcloud behind a Reverse Proxy, for which I'm using Swag (Let's Encrypt). Through trial and error, the behaviour I observed is that when uploading files (via WebDAV in my case), they get put in the /tmp folder of Swag. Once they are fully uploaded, they are copied across to Nextcloud's /temp directory. Therefore, both paths need to be added as Bind mounts for this to work. What To Do Head over to the Docker tab, and edit Nextcloud and add a new Path variable: Name: Temp Container Path: /tmp Host Path: /mnt/user/appdata/nextcloud/temp Next edit the Swag (or Let's Encrypt) container, and add a new Path variable: Name: Temp Container Path: /var/lib/nginx/tmp Host Path: /mnt/user/appdata/swag/temp And that's it! Really simple fix, but no one seemed to have an answer. Now when I backup my files, the Docker image no longer fills up.
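For reference, the two mappings described above translate to the following bind mounts in docker-CLI notation - a hedged sketch only, since the guide uses the Unraid template editor; the host paths follow the guide and everything else stays as in your existing container definitions:
# Nextcloud: map the container's /tmp into appdata so uploads don't land inside the Docker image
-v /mnt/user/appdata/nextcloud/temp:/tmp
# Swag (Let's Encrypt): map nginx's temp/buffer directory the same way
-v /mnt/user/appdata/swag/temp:/var/lib/nginx/tmp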
  12. 4 points
    I've been using Unraid for a while now and have collected some experience on boosting SMB transfer speeds:
1.) Choose the right CPU
The most important part is to understand that SMB is single-threaded. This means SMB uses only one CPU core to transfer a file. This is valid for the server and the client. Usually this is not a problem, as SMB does not fully utilize a CPU core (except on really low-powered CPUs). But because of the ability to split shares across multiple disks, Unraid adds an additional process called SHFS, and its load rises proportionally to the transfer speed, which can overload your CPU core. So the most important part is to choose the right CPU.
At the moment I'm using an i3-8100, which has 4 cores and 2257 single-thread Passmark points. And since I have this single-thread power I'm able to use the full bandwidth of my 10G network adapter, which was not possible with my previous Intel Atom C3758 (857 points), although both have comparable total performance. I was not even able to reach 1G speeds while a parallel Windows backup was running (see the next section to bypass this limitation). Now I'm able to transfer thousands of small files and, in parallel, transfer a huge file at 250 MB/s.
With this experience I suggest a CPU that has around 1400 single-thread Passmark points to fully utilize a 1G Ethernet port. As an example, the smallest CPU I would suggest for Unraid is an Intel Pentium Silver J5040. P.S. Passmark has a list sorted by single-thread performance for desktop CPUs and server CPUs.
2.) Bypass the single-thread limitation
The single-thread limitation of SMB and SHFS can be bypassed by opening multiple connections to your server. This means connecting to "different" servers. The easiest way to accomplish that is to use the IP address of your server as a "second" server while using the same user login:
\\tower\sharename -> best option for user access through the file explorer, as it is automatically displayed
\\10.0.0.2\sharename -> best option for backup software; you could map it as a network drive
If you need more connections, you can add multiple entries to your Windows hosts file (Win+R and execute "notepad c:\windows\system32\drivers\etc\hosts"):
10.0.0.2 tower2
10.0.0.2 tower3
Results: if you now download a file from your Unraid server through \\10.0.0.2 while a backup is running against \\tower, it will reach maximum speed, while a download from \\tower is massively throttled.
3.) Bypass Unraid's SHFS process
If you enable access directly to the cache disk and upload a file to \\tower\cache, this will bypass the SHFS process. Beware: do not move/copy files between the cache disk and shares, as this could cause data loss! The eligible user account will be able to see all cached files, even those from other users.
Temporary solution or "for admins only": as admin, or for a short test, you could enable "disk shares" under Settings -> Global Share Settings. With that, all users can access all array and cache disks as SMB shares. As you don't want that, your first step is to click on each disk in the WebGUI > Shares and forbid user access, except for the cache disk, which gets read/write access only for your "admin" account. Beware: do not create folders in the root of the cache disk, as this will create new SMB shares.
Safer permanent solution: use this explanation.
Results: in this thread you can see the huge difference between copying to a cached share and copying directly to the cache disk.
4.) Enable SMB Multichannel + RSS
SMB Multichannel is a feature of SMB3 that allows splitting file transfers across multiple NICs (Multichannel) and multiple CPU cores (RSS) since Windows 8. This will raise your throughput depending on your number of NICs, NIC bandwidth, CPU and the settings used.
This feature is experimental: SMB Multichannel has been considered experimental since its release with Samba 4.4. The main bug behind this status is resolved in Samba 4.13, and the Samba developers plan to resolve all remaining bugs with 4.14. Unraid 6.8.3 contains Samba 4.11. This means you use Multichannel at your own risk!
Multichannel for multiple NICs: let's say your mainboard has four 1G NICs and your client has a 2.5G NIC. Without Multichannel the transfer speed is limited to 1G (117.5 MByte/s). But if you enable Multichannel, it will split the file transfer across the four 1G NICs, boosting your transfer speed to 2.5G (294 MByte/s). Additionally it uses multiple CPU cores, which is useful to avoid overloading smaller CPUs.
To enable Multichannel, open the Unraid web terminal and enter the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
And add the following to it:
server multi channel support = yes
Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Then restart the Samba service with this command:
samba restart
You may also need to reboot your Windows client, but after that it is enabled and should work.
Multichannel + RSS for single and multiple NICs: but what happens if your server has only one NIC? Then Multichannel has nothing to split across, but it has a sub-feature called RSS which is able to split file transfers across multiple CPU cores with a single NIC. Of course this feature also works with multiple NICs. And this is important, because it creates multiple single-threaded SMB and SHFS processes which are then load-balanced across all CPU cores, instead of overloading only a single core. So if your server has slow SMB file transfers while the overall CPU load on the Unraid WebGUI dashboard is not really high, enabling RSS will boost your SMB file transfers to the maximum! But it requires RSS capability on both sides.
You need to check your server's NIC by opening the Unraid web terminal and entering this command (this could become obsolete with Samba 4.13, as it has built-in RSS autodetection):
egrep 'CPU|eth*' /proc/interrupts
It must return multiple lines (one per CPU core), like this:
egrep 'CPU|eth0' /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
129:   29144060          0          0          0  IR-PCI-MSI 524288-edge  eth0
131:          0   25511547          0          0  IR-PCI-MSI 524289-edge  eth0
132:          0          0   40776464          0  IR-PCI-MSI 524290-edge  eth0
134:          0          0          0   17121614  IR-PCI-MSI 524291-edge  eth0
Now you can check your Windows 8 / Windows 10 client by opening PowerShell as admin and entering this command:
Get-SmbClientNetworkInterface
It must return "True" for "RSS Capable":
Interface Index RSS Capable RDMA Capable Speed   IpAddresses Friendly Name
--------------- ----------- ------------ -----   ----------- -------------
11              True        False        10 Gbps {10.0.0.10} Ethernet 3
Now, once you are sure that RSS is supported on your server, you can enable Multichannel + RSS by opening the Unraid web terminal and entering the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
Add the following, change 10.10.10.10 to your Unraid server's IP, and set speed to "10000000000" for a 10G adapter or "1000000000" for a 1G adapter:
server multi channel support = yes
interfaces = "10.10.10.10;capability=RSS,speed=10000000000"
If you are using multiple NICs the syntax looks like this (add the RSS capability only for supporting NICs!):
interfaces = "10.10.10.10;capability=RSS,speed=10000000000" "10.10.10.11;capability=RSS,speed=10000000000"
Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Now restart the SMB service:
samba restart
Does it work? After rebooting your Windows client (this seems to be a must), download a file from your server (so a connection is established), and then check whether Multichannel + RSS works by opening Windows PowerShell as admin and entering this command:
Get-SmbMultichannelConnection -IncludeNotSelected
It must return a line similar to this (a returned line = Multichannel works), and if you want to benefit from RSS then "Client RSS Capable" must be "True":
Server Name Selected Client IP    Server IP   Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- ---------    ---------   ---------------------- ---------------------- ------------------ -------------------
tower       True     10.10.10.100 10.10.10.10 11                     13                     True               False
If you are interested in test results, look here.
5.) smb.conf settings tuning
At the moment I'm doing intense tests with different SMB config settings found on different websites:
https://wiki.samba.org/index.php/Performance_Tuning
https://wiki.samba.org/index.php/Linux_Performance
https://wiki.samba.org/index.php/Server-Side_Copy
https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/speed.html
https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
https://lists.samba.org/archive/samba-technical/attachments/20140519/642160aa/attachment.pdf
https://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
https://www.samba.org/samba/docs/current/man-html/ (search for "vfs")
https://lists.samba.org/archive/samba/2016-September/202697.html
https://codeinsecurity.wordpress.com/2020/05/18/setting-up-smb-multi-channel-between-freenas-or-any-bsd-linux-and-windows-for-20gbps-transfers/
https://www.snia.org/sites/default/files/SDC/2019/presentations/SMB/Metzmacher_Stefan_Samba_Async_VFS_Future.pdf
https://www.heise.de/newsticker/meldung/Samba-4-12-beschleunigt-Verschluesselung-und-Datentransfer-4677717.html
I will post my results after all tests are finished. For now I would say it does not really influence performance, as recent Samba versions are already optimized, but we will see.
6.) Choose a proper SSD for your cache
You could use Unraid without an SSD, but if you want fast SMB transfers an SSD cache is absolutely required. Otherwise you are limited to slow parity writes and/or by your slow HDD. But many SSDs on the market are not suitable for use as an Unraid SSD cache.
DRAM: many cheap models do not have a DRAM cache. This small buffer is used to collect very small files or random writes before they are finally written to the SSD, and/or is used as a high-speed area for the file mapping table. In short: you need a DRAM cache in your SSD. No exception.
SLC cache: while DRAM is only absent in cheap SSDs, an SLC cache can be missing across different price ranges. Some cheap models use a small SLC cache to "fake" their technical data. Some mid-range models use a big SLC cache to raise durability and speed when installed in a client PC. And some high-end models do not have an SLC cache at all, as their flash cells are fast enough without it. Ultimately you are not interested in the SLC cache; you are only interested in continuous write speeds (see "Verify the continuous writing speed" below).
Determine the required writing speed: before you are able to select the right SSD model you need to determine your minimum required transfer speed. This should be simple. How many Ethernet ports do you want to use, or do you plan to install a faster network adapter? Let's say you have two 1G ports and plan to install a 5G card. With SMB Multichannel it's possible to use them in sum, and as you plan to install a 10G card in your client, you could use 7G in total. Now we can calculate: 7 x 117.5 MByte/s (real throughput per 1G Ethernet) = 822 MByte/s, and with that we have two options:
- buy one M.2 NVMe drive (assuming your motherboard has such a slot) with a minimum writing speed of 800 MByte/s
- buy two or more SATA SSDs and use them in a RAID0, each with a minimum writing speed of 400 MByte/s
Verify the continuous writing speed of the SSD: as an existing SLC cache hides the real transfer speed, you need to invest some time to check whether your desired SSD model has an SLC cache and how much the SSD throttles after it's full. A solution could be to search for "review slc cache" in combination with the model name. Using the image search can be helpful as well (maybe you see a graph with a falling line).
If you do not find anything, use YouTube. Many people out there test their new SSD by simply copying a huge amount of files onto it. Note: CrystalDiskMark, AS SSD, etc. benchmarks are useless, as they only test a really small amount of data (which fits into the fast cache).
Durability: you could look at the "TBW" value of the SSD, but in the end you won't be able to kill the SSD within the warranty period, as long as the very first filling of your Unraid server is done without the SSD cache. As an example, a 1TB Samsung 970 EVO has a TBW of 600, and if your server has a total size of 100TB you would waste 100TBW on your first fill for nothing. If you plan to use Plex, think about using RAM as your transcoding storage, which saves a huge amount of writes to your SSD. Conclusion: optimize your writes instead of buying an expensive SSD.
NAS SSDs: do not buy "special" NAS SSDs. They do not offer any benefit compared to high-end consumer models, but cost more.
7.) More RAM
More RAM means more caching, and as RAM is even faster than the fastest SSDs, this adds an additional boost to your SMB transfers. I recommend installing two identical RAM modules (or more, depending on the number of slots) to benefit from "dual channel" speeds. RAM frequency is not as important as RAM size.
Read cache for downloads: if you download a file twice, the second download does not read the file from your disk; it comes from RAM only. The same happens if you're loading the covers of your MP3s or movies, or if Windows is generating thumbnails of your photo collection. More RAM means more files in your cache. The read cache uses, by default, 100% of your free RAM.
Write cache for uploads: Linux uses, by default, 20% of your free RAM to cache writes before they are written to the disk. You can use the Tips and Tweaks plugin to change this value, or add this to your Go file (with the Config Editor plugin):
sysctl vm.dirty_ratio=20
But before changing this value, you need to be sure you understand the consequences:
- Never use your NAS without a UPS if you use write caching, as this could cause huge data loss!
- The bigger the write cache, the smaller the read cache (so using 100% of your RAM as write cache is not a good idea!)
- If you upload files to your server, they are written to your disk 30 seconds later (vm.dirty_expire_centisecs)
- Without an SSD cache: if your upload size is generally bigger than your write cache, the kernel starts cleaning up the cache and writing the transfer to your HDD(s) in parallel, which can result in slow SMB transfers. Either raise your cache size so it never fills up, or consider disabling the write cache entirely.
- With an SSD cache: SSDs love parallel transfers (read #6 of this guide), so a huge write cache, or even a full one, is not a problem.
But which dirty_ratio value should you set? This is something you need to determine yourself, as it is completely individual. First, think about the highest RAM usage that is possible: active VMs, RAM disks, Docker containers, etc. From that you get the smallest amount of free RAM on your server:
Total RAM size - RAM reserved for VMs - RAM used by Docker containers - RAM disks = free RAM
Now the harder part: determine how much RAM is needed for your read cache. Do not forget that VMs, Docker containers, processes etc. load files from disks, and those are all cached as well. I thought about this and came up with this command that counts hot files:
find /mnt/cache -type f -amin -86400 ! -size +1G -exec du -bc {} + | grep total$ | cut -f1 | awk '{ total += $1 }; END { print total }' | numfmt --to=iec-i --suffix=B
- It counts the size of all files on your SSD cache that have been accessed in the last 24 hours (86400 seconds)
- The maximum file size is 1GiB, to exclude VM images, Docker containers, etc.
- This works only if you use your cache for your hot shares like appdata, system, etc. (as you hopefully do)
- Of course you could repeat this command over several days to check how it fluctuates
- This command must be executed after the mover has finished its work
- This command isn't perfect, as it does not count hot files inside a VM image
Now we can calculate:
100 / total RAM x (free RAM - command result) = vm.dirty_ratio
If your calculated "vm.dirty_ratio" is:
- lower than 5% (or even negative): set it to 5 and buy more RAM
- between 5% and 20%: set it accordingly, but you should consider buying more RAM
- between 20% and 90%: set it accordingly
- higher than 90%: you are probably not using your SSD cache for hot shares (as you should), or your RAM is huge as hell (congratulations ^^). I suggest not setting a value higher than 90.
Of course you need to recalculate this value if you add more VMs or Docker containers. (A small worked example of this calculation follows at the end of this post.)
8.) Disable haveged
Unraid does not trust the randomness of Linux and uses haveged instead. Because of that, all encryption processes on the server use haveged, which produces extra load. If you don't need it, disable it through your Go file (CA Config Editor) as follows:
# -------------------------------------------------
# disable haveged as we trust /dev/random
# https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452
# -------------------------------------------------
/etc/rc.d/rc.haveged stop
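As referenced in section 7, a minimal bash sketch of the vm.dirty_ratio calculation; all numbers are hypothetical placeholders, not measurements from this guide:
#!/bin/bash
# hypothetical example values in GB - replace them with your own measurements
total_ram=32        # installed RAM
reserved_ram=12     # VMs + Docker containers + RAM disks
hot_files=6         # result of the "hot files" find command above, converted to GB

free_ram=$((total_ram - reserved_ram))
# guide formula: 100 / total RAM x (free RAM - hot files) = vm.dirty_ratio
dirty_ratio=$((100 * (free_ram - hot_files) / total_ram))
echo "vm.dirty_ratio = $dirty_ratio"   # here: 100 * (20 - 6) / 32 = 43
# apply it as described in the guide, e.g.: sysctl vm.dirty_ratio=$dirty_ratio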
  13. 3 points
    Unraid is a cut-down version of Slackware, specifically stripped of everything that's not needed, because it loads into and runs entirely in RAM. We don't have the luxury of just slapping every single driver and support package into it; you would end up with a minimum 16GB or 32GB RAM spec. Before VM and Docker container support was added, you could have all the NAS functionality with 1GB of RAM. Now, 4GB is the practical bare minimum for NAS, even if you don't use VMs and containers, and 8GB is still cramped. Adding support for a single adapter that works well in Slackware, provided the manufacturer keeps up with Linux kernel development, shouldn't be an issue. That way we can tell people: if you want wifi, here is a list of cards using that driver that are supported. It's the blanket statement of "let's support wifi" that doesn't work. BTW, even if we do get that golden-ticket wifi chip support from the manufacturer and Unraid supports it perfectly, the forums will still be bombarded with performance issues, because either their router sucks, or their machine isn't in the zone of decent coverage, or their neighbours cause interference at certain times of day, etc. Bottom line, wifi on a server just isn't ready for primetime yet. Desktop daily drivers, fine. 24/7/365 servers with constant activity from friends and family, no. It's much easier support-wise to require wired. If the application truly has to have wireless, there are plenty of ways to bridge a wireless signal and convert it to wired. A pair of commercial wifi access points with a dedicated backhaul channel works fine; that's what I use in a couple of locations.
  14. 3 points
    But helium ALWAYS goes up. Right?🤣
  15. 3 points
    So, here's the situation: I have now built a complete VLAN just for Docker containers and the Unraid server. For one thing, I think it can't hurt anyway if every container has its own IP address (it also makes port selection easier in the long run), and for another, it lets me temporarily assign whatever name I want to the IP address of, for example, Nextcloud via DNS. On top of that, it's quite tidy: if I look in DHCP now, I have one VLAN containing the respective applications. I will keep it that way in the future as well - I think. The DNS part is only temporary until my Fritz!Box 7590 arrives! Once again, many thanks to everyone who helped me here and gave tips! I enjoy being around this forum and will also try in the future to help other unRAIDers with my knowledge as best I can. I wish you all a nice weekend - maybe it's not as rainy where you are as it is here in Salzburg 😒 Best regards, Dominic
  16. 3 points
    <os>
      <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
      <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
      <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
    </os>
This <os> tag is what you need to change.
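For illustration only: if the VM's domain folder were renamed (say to MacinaboxBigSur, a hypothetical name), the loader and nvram paths inside that block are what would need to point at the new location - keep whatever path actually holds your OVMF files:
    <os>
      <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
      <!-- hypothetical renamed folder shown below; adjust to your own domain path -->
      <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxBigSur/ovmf/OVMF_CODE.fd</loader>
      <nvram>/mnt/user/domains/MacinaboxBigSur/ovmf/OVMF_VARS.fd</nvram>
    </os>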
  17. 3 points
    The Ultimate UNRAID Dashboard Version 1.4 is here! This is a MASSIVE 😁 update adding many new powerful features, panels, and hundreds of improvements. The main goal of this release is to increase usability and simplify the dashboard so more people can modify it without getting lost in REGEX and having to ask for support as often. As a result, the most complex queries have been rewritten in a way that is clear and transparent, while still remaining just as powerful. Finally, I have added requested features and threw in some new bells and whistles that I thought you guys would like. As always, I'm here if you need me. ENJOY! Highlights: Keep it Simple - Added User Transparency Back Into Dashboard by Removing REGEX on Certain Panels This Will Make it Extremely Easy to Customize the Dashboard to Your Specific Needs/Requirements You Can Now See Exactly How Certain Panels are Derived and Making Modifications is Self Explanatory This Will Also Make Support MUCH Easier For Everyone! Multi-Host Support Change the Host Drop Down Variable and Monitor Another Host Instantly Added the Host Variable to Every Single Panel The Entire Dashboard Can Now Monitor Any Host in Real Time With a Single Variable Change Via Drop Down Menu! Initial Support For Non Server Hardware Initial Support For Sensors Plugin to Monitor Non Server Hardware (Only Used If IPMI Is NOT Supported on Your Hardware) Requires New "sensors" Plugin (See Dependencies Section on Post #1) Added Template Sensor Queries (Disabled By Default) You Will Need to Modify These Example Queries As Required For Your Non Server Hardware These are Just Building Blocks to Help Those Who Cannot Use IPMI Please See the Forum Topic For Detailed Help! Initial Support For Unassigned Drives Added Ability For Unassigned Drives Via 2 Variables (Serial and Path) Added Unassigned Drives to Panels Throughout Dashboard Where Applicable Default Dashboard Comes With Only 1 Unassigned Path Variable You Will Need to Add Additional Path Variables to Include/Exclude Multiple Unassigned Drive Paths Support For Multiple Cache Drives in DiskIO Graphs Support For Multiple Unassigned Drives in DiskIO Graphs Monitoring of ALL System Temps Monitoring of ALL System Voltages Monitoring of ALL System Fans Monitoring of RAM DIMM Temps Further GUI Refinements to Assist with Smaller Resolution Monitors Variable Changes Removed Redundant And/Or Unneeded Variables Cleans Up and Reduces Clutter Of Upper Variable Menu Re-Ordered Variables Smaller Length Variables Are Now First (Typically Row 1) Longer Length Variables Are Now Last (Typically Row 2) Standardized Dashboard to Use Single Datasource Instead of 3 Before: Telegraf/Disk/UPS After: Telegraf This Also Keeps the Variables Menu Cleaner With Less Clutter (2 Less Variables!) 
Standardized All Variables Names in Title Case, Logical Prefixes, and Added Underscores to Separate Words Shortened Variable Label Text When/Where Possible Changed All Panels to Use Default Min Interval Setting of Datasource Set Once in on Datasource and All Panels Not Explicitly Set Will Auto Adjust Only Those Panels Different From the Default Min Interval Are Now Explicitly Set (Example: Array Growth) Modified and Added New Auto-Refresh Time Interval Options In Drop Down Menu Now: 30s,1m,5m,10m,15m,30m,1h,2h,6h,12h,1d Replaced All "Retro LCD" Bar Gauges With "Basic" (Cleaner GUI With Unified Aesthetic) Adjusted All Panel Thresholds to Be More Accurate on Color Changes (See Bug Fixes) Added GROUP BY "time($_interval)" To All Panels Increases Overall Dashboard Performance Removed Min/Max/Avg Values From All Line Graphs to Decrease Screen Width Requirements Shows More Data on Smaller Screens Corrected Various Grammatical Errors Bug Fixes and Optimizations Hundreds of Other Quality of Life and Under the Hood Improvements You Can't See Them, But They're There...In Code...LOTS OF CODE Bug Fixes: Changed Remaining Panels Using FROM "autogen" to "default" Updated All Aliases to Match Panel Names There Were Still Some Discrepancies Adjusted All Threshold Values to Be 1/10th Below Desired Measurement Forces Color Change on Next Whole Number Example: 90% Is Supposed to Be Red, But Would Still Show Preceding Orange Threshold Color (89.9% Resolves This) New Panels: Overwatch System Temps Monitors ALL System Temps (Including CPU) Uses IPMI Unit "degrees_c" to Pull Values Instead of Individual Names Added/Modified Panel Description System Power Monitors ALL System Voltages Uses IPMI Unit "volts" to Pull Values Instead of Individual Names Added/Modified Panel Description Fan Speeds (Replaces Fan Speed Gauges): Monitors ALL System Fans Uses IPMI Unit "rpm" to Pull Values Instead of Induvial Names Also Fixes Issue Where Labels Were Not Being Dynamically Generated Added/Modified Panel Description RAM Load: Show Current Ram Usage % Replaces RAM Used % Disk I/O Unassigned I/O (Read & Write) Adds Support to Monitor Disk I/O of Unassigned Drives Does Not Show Min/Max/Avg Values On Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Added Ability to Show Multiple Unassigned Drives by Serial Number Disk Overview Unassigned Storage Adds Support to Monitor Storage of Unassigned Drives Detailed Server Performance RAM DIMM Temps Adds Support to Monitor RAM DIMM Temps Uses IPMI & REGEX Panel Changes: Overwatch ALL Subpanels Overhauled Look and Feel Array Total Added Sparkline Graph Array Utilized Added Sparkline Graph Array Available Added Sparkline Graph Array Utilized % Added Sparkline Graph Cache Utilized Added Sparkline Graph Cache Utilized % Added Sparkline Graph CPU Load Added Sparkline Graph RAM Load Added Sparkline Graph 1GbE Network Renamed Panel Changed to Orientation to Vertical Added Sparkline Graph 10GbE Network Renamed Panel Changed to Orientation to Vertical Added Sparkline Graph Array Growth (Year) Renamed Panel Previously Named "Array Growth (Annual)" DISK I/O Cache I/O (Read & Write) Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Added Ability to Show Multiple Cache Drives by Serial Number Array I/O (Read) Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Array I/O (Write) Removed Min/Max/Avg Values From Line Graph to Decrease Screen 
Width Requirements Shows More Data on Smaller Screens Disk Overview Array Disk Storage Added Used % Field Now Used to Indicate Drive Free Space By Color Modified Thresholds to Be More Accurate Total Array Storage Renamed Panel Previously Named "Array Storage" Added Used % Field Now Used to Indicate Drive Free Space By Color Modified Thresholds to Be More Accurate Drive Temperatures Renamed Panel Formerly "Drive Temperatures (Celsius)" Added Support For Unassigned Drives Detailed Server Performance Network Interfaces (RX) Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Network Interfaces (TX) Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Network 1GBe Renamed Panel Formerly "Network 1GBe (eth0)" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Network 10GBe Renamed Panel Formerly "Network 10GBe (eth2)" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens RAM Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens CPU Package Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens CPU 01 Load Renamed Panel Formerly "CPU 01" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Removed REGEX and Manually Set Cores Individually Increases Supportability Makes it Easier For Novice Users by Increasing Query Transparency Ensures Tags Stay Ordered Numerically (1,10,11...2,20,21... Is Now 1,2,...10...20...) Renamed Each Core With +1 Array Order Naming (Core 00 Now = Core 01...) CPU 02 Load Renamed Panel Formerly "CPU 02" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Removed REGEX and Manually Set Cores Individually Increases Supportability Makes it Easier For Novice Users by Increasing Query Transparency Ensures Tags Stay Ordered Numerically (1,10,11...2,20,21... Is Now 1,2,...10...20...) Renamed Each Core With +1 Array Order Naming (Core 00 Now = Core 01...) 
CPU 01 Core Load Changed Bar Gauge Type From "Retro LCD" to "Basic" Changed Bar Gauge Orientation to Vertical CPU 02 Core Load Changed Bar Gauge Type From "Retro LCD" to "Basic" Changed Bar Gauge Orientation to Vertical Fan Speeds Renamed Panel Formerly "IPMI Fan Speeds" Removed Min/Max/Avg Values From Line Graph to Decrease Screen Width Requirements Shows More Data on Smaller Screens Updated Panel Descriptions: Overwatch System Temps Note: Uses IPMI System Power Note: Uses IPMI Fan Speeds Note: Uses IPMI Array Total: Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Utilized Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Available Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Utilized % Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Growth (Day) Note: Change Path to "mnt/user" if Cache Drive is Not Present Array Growth (Week) Note: Query Options > Min Interval - Must Match on Week/Month/Year To Stay In Sync Set to 2 Hours by Default For Performance Reasons) - Change Path to "mnt/user" if Cache Drive is Not Present\ Disk Overview Array Disk Storage Note: Uses Variable Array Total Storage Note: Change Path to "mnt/user" if Cache Drive is Not Present Unassigned Storage Note: Uses Variable Drive S.M.A.R.T. Health Summary Removed Description Drive Life Removed Description Detailed Server Performance CPU 01 Core Load Removed Description CPU 02 Core Load Removed Description RAM DIMM Temps Note: Uses IPMI & REGEX Removed/Converted/Deprecated Panels: Overwatch CPU 01 Temp CPU 02 Temp RAM Free % Fan Speed Gauges Variables: New Drives_Unassigned Used to Select Unassigned Drives(s) From Drop Down Menu Path_Unassigned Used to Set a Single Unassigned Drive Path For Inclusion/Exclusion in Drive Panels Add Additional Unassigned Path Variables to Include/Exclude Additional Unassigned Drive Paths Renamed Host Formerly "host" Datasource_Telegraf Formerly "telegrafdatasource" CPU_Threads Formerly "cputhreads" UPS_Max_Watts Formerly "upsmaxwatt" UPS_kWh_Price Formerly "upskwhprice" Currency Formerly "currency" Drives_Flash Formerly "flashdrive" Drives_Cache Formerly "cachedrives" Drives_Parity Formerly "paritydrives" Drives_Array Formerly "arraydrives" Deprecated diskdatasource upsdatasource See Post Number 1 For the New Version 1.4 JSON File!
  18. 3 points
    The time has nearly come. Just finishing up documentation.
  19. 3 points
    Yes, that's true, but how do they finance themselves? I mean, look at TMDB. Their site doesn't look like they're short of capable developers. EDIT: Aha, the money comes from TiVo (source), and they license the data on to other companies. So much for "community". Then I'd like to get paid for my updates to the database, please. Must have been about 10 entries or so ^^
  20. 3 points
    Unraid 6.9.x is available in French in its beta version (currently beta29). Despite the "beta" name, it is very stable and has been in beta for a very long time. I have been running the betas since they came out and have had no issues.
  21. 3 points
    To get the interface in German, add the following to papermerge.conf.py in the appdata folder:
LANGUAGE_CODE = "de-DE"
To get German language support for OCR, go to the Docker shell and type:
apt-get install tesseract-ocr-deu
and change the following in papermerge.conf.py like this:
OCR_DEFAULT_LANGUAGE = "deu"
OCR_LANGUAGES = {
    "deu": "Deutsch",
}
And voila! German interface and OCR.
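Taken together, a hedged sketch of the two steps from the Unraid console; the container name "papermerge" and the appdata path are assumptions - use whatever your Docker tab and template actually show:
# install the German Tesseract language data inside the running container
# (this is not persistent: re-run it after the container is updated or recreated)
docker exec -it papermerge apt-get update
docker exec -it papermerge apt-get install -y tesseract-ocr-deu
# then edit papermerge.conf.py in your appdata share (path assumed) as described above:
#   LANGUAGE_CODE = "de-DE"
#   OCR_DEFAULT_LANGUAGE = "deu"
#   OCR_LANGUAGES = { "deu": "Deutsch" }
# and restart the container
docker restart papermerge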
  22. 3 points
    I would suggest removing the plug in, then moving everything. I'm working on getting a Dev box up and running so I can continue working on this for the new Beta.
  23. 3 points
    It's not the socks credentials you need, it's your main username (pXXXXXXX) & password for PIA.
  24. 3 points
    Because it's been asked a few times - yes, I am working on WireGuard support for PIA now. It's going OK, but there is a fair bit of work to integrate it with the existing code, so it may take a little while; hopefully it should be done fairly 'soon' (trademark limtech 🙂). WireGuard users on other VPN providers (non-PIA), a question for you: is your WireGuard config file static, or are there any dynamically generated parts to it? Please detail what is dynamic (if anything) and name your VPN provider - I'm trying to make any code I write as VPN-provider-agnostic as possible.
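For anyone answering the question above, here is a generic WireGuard client config sketch with comments marking which fields are usually static and which some providers generate dynamically; every value is a placeholder, not from any specific provider:
[Interface]
# usually static per account/device, but some providers rotate keys or hand out
# the tunnel address dynamically via their API at registration/token time
PrivateKey = <your-private-key>
Address = 10.14.0.2/32
DNS = 10.14.0.1

[Peer]
# typically static per server, though the endpoint/port can differ per generated config
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0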
  25. 3 points
    I decided to take a gamble and changed preview to nightly (edited the template and replaced preview with nightly under Repository), and it updated and no longer complains. I was on 3.0.0.3790 under preview and am now on 3.0.0.3820 under nightly... so it worked, I guess?
  26. 3 points
    Support for multi remote endpoints and PIA 'Next-Gen' networks now complete, see Q19 and Q20 for details:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  27. 3 points
    You don't quite need all those super advanced techniques, as they only offer marginal improvement (if any). And then you have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for TR gen 1 + 2) and some were workarounds while waiting for the software to catch up with the hardware (e.g. cpu-mode tweaks to fake TR as Epyc so cache is used correctly are no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel, which supposedly works better with 3rd-gen Ryzen; compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.)
In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):
1. Pick all the VM cores from the same CCX and CCD (i.e. die); this improves fps consistency (i.e. less stutter). Note: this is specific to a gaming VM, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCDs as possible, even if it means partially using a CCX/CCD.
2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was just isolcpus). See the sketch after this post.
3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads; this only applies if you use vdisk / ata-id pass-through. From my own testing, iothread pinning makes no difference with NVMe PCIe pass-through.
4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction. To be honest though, I haven't found this to make any difference.
5. Don't run parity sync or any heavy IO / CPU activities while gaming.
In terms of where you can find these settings:
1. The 3900X has 12 cores, which is 3x4 -> every 3 cores is a CCX, every 2 CCX is a die (and your 3900X has 2 dies + an IO die).
2. Watch the SpaceInvader One tutorial on YouTube. Just remember to do what you do with isolcpus for nohz_full + rcu_nocbs as well.
3. Watch the SpaceInvader One tutorial on YouTube. This is a VM XML edit.
4. Watch the SpaceInvader One tutorial on YouTube. He has a link to download msi_util.
5. No explanation needed.
Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability compared to bare metal.
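As referenced in rule 2, a minimal sketch of what the isolation can look like in /boot/syslinux/syslinux.cfg. The stanza mirrors the stock Unraid boot entry; the isolated thread numbers are placeholders, not a recommendation - check your actual core/thread pairing on the Unraid CPU Pinning page first:
label Unraid OS
  menu default
  kernel /bzimage
  # example: isolate the CPU threads that will be pinned to the gaming VM
  # (thread numbers below are placeholders - use the pairs shown for your own CPU)
  append isolcpus=8-11,20-23 nohz_full=8-11,20-23 rcu_nocbs=8-11,20-23 initrd=/bzroot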
  28. 3 points
    Turbo Write technically known as "reconstruct write" - a new method for updating parity JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail, what it is, how it compares with the traditional method, and the ramifications of using it. First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate. Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block, and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes. To summarize, for the "read/modify/write" method, you need to: * read in the parity block and read in the existing data block (can be done simultaneously) * compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short) * wait for platter rotation (very long!) * write out the parity block and write out the data block (can be done simultaneously) That's 2 reads, a calc, a long wait, and 2 writes. Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done! To summarize, for the "reconstruct write" method, you need to: * write out the data block while simultaneously reading in the data blocks of all other data drives * calculate the new parity block from all of the data blocks, including the new one (very short) * write out the parity block That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster! The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write. So what are the ramifications of this? 
* For some operations, like parity checks and parity builds and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
* For large write operations, like large transfers to the array, it can make a big difference in speed!
* For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
* And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, and expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
* So one of the questions to be faced is, how do you want your various write operations to be handled. Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer spinning them up than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
* Another possible problem if you were in Turbo mode, and you are watching a movie streaming to your player, then a write kicks in to the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?
Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.
Tom talked about that Auto mode quite awhile ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use.
So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite awhile ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day, and the other method at night. I think many users may find that scheduling it may satisfy their needs, Turbo when there's lots of writing, old style over night and when they are streaming movies.
For awhile, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers are complete.
Edit: added what the setting is and where it's located (completely forgot this!)
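    For anyone who wants to see what the two methods actually compute, here is a minimal sketch, assuming single parity calculated as a plain XOR across data blocks (which is what unRAID's single parity is) and completely made-up 4-bit "blocks" on a 4-data-drive array:
        # made-up 4-bit "blocks" on a 4-data-drive array
        d0=$((2#1010)); d1=$((2#0110)); d2=$((2#1111)); d3=$((2#0001))
        parity=$(( d0 ^ d1 ^ d2 ^ d3 ))   # existing parity
        new=$((2#0011))                   # new data destined for disk 2
        # read/modify/write: old parity XOR old data XOR new data (2 reads, 2 writes, rotation wait)
        rmw=$(( parity ^ d2 ^ new ))
        # reconstruct write: XOR of all data blocks including the new one (reads of the other disks, no wait)
        recon=$(( d0 ^ d1 ^ new ^ d3 ))
        echo "$rmw $recon"                # both print 14 - same parity, different way of getting there
    Both paths land on the same parity block; the difference is purely which disks have to be read and whether the parity disk has to wait for the platter to come back around.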
  29. 3 points
    OpenVPN support for PIA 'next-gen' network is now in, see Q19 for how to switch from the legacy network to next-gen in the link below
    Multi remote endpoint support is now in, see Q20 for how to define multiple endpoints (OpenVPN only) in the link below
    WireGuard support is now included, see Q21 for how to switch from OpenVPN to WireGuard in the link below
    https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  30. 2 points
    If you click the “tags” tab it gives you the build number to use. Example- binhex/arch-delugevpn:1.3.15_18_ge050905b2-1-04 FYI- These versions do not support the next-gen servers.
  31. 2 points
    Just FYI, I think I was having a similar issue to Marshalleq. On RC2, when I stopped the unRAID array, which stopped my VM, restarted the unraid array and attempted to restart my VM, it would hang the VM management page (white bottom, no uptime in unraid) and then if you attempted to reboot, it would not reboot successfully. You would have to reset the machine to reboot. However, with RC4, everything seems to be working correctly.
  32. 2 points
    Hi Andreas! 😉 As far as I know you can only have one array in unRAID. It would be news to me if you could run several... Since you say you have two 12TB drives, I would use those as parity and all six 3TB drives as the array. If you set it up that way I would also recommend adding a 1TB SSD as cache. I hope I understood your question correctly and was able to answer it 😄 Best regards, Dominic
  33. 2 points
    I have had the same issue on a clean-installed unraid server: with PIA, the webui is blocked when using the old VPN technology, even if you add the line from the earlier post. Switch over to the new technology: copy the new ovpn file in, delete everything else in the openvpn folder, restart sabnzbd, and it will work like a charm. The download link is described on the DelugeVPN support page:
    ( Q19. I see that PIA has a new network called 'Next-Gen', does *VPN Docker Images that you produce support this, and if so how do i switch over to it?
    A19. Yes, it's now fully supported including port forwarding. If you want to switch from PIA's current network to the 'next-gen' network then please generate a new ovpn file using the following procedure:
    Please make sure you have the latest Docker Image by issuing a docker pull.
    Download a next-gen ovpn config file - click on the following link, then click on 'View OpenVPN Configurations' and download an ovpn file for next-gen:- https://www.privateinternetaccess.com/pages/download#
    Extract the zip and copy ONE of the ovpn files and any other certs etc to /config/openvpn/, ensuring you either rename the extension or delete the old current-gen network ovpn file.
    Restart the container and monitor the /config/supervisord.log file for any issues. )
    Thanks for all your hard work, binhex.
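    For reference, on Unraid the switch-over boils down to something like the following from a terminal. The container name, appdata path and ovpn filename below are only examples - substitute your own:
        cd /mnt/user/appdata/binhex-sabnzbdvpn/openvpn      # example appdata path for the container's /config
        rm -f *.ovpn                                        # remove the legacy current-gen config
        cp /mnt/user/downloads/ca_toronto.ovpn .            # ONE next-gen ovpn file (plus any certs it references)
        docker restart binhex-sabnzbdvpn                    # then watch supervisord.log in the appdata folder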
  34. 2 points
    Unfortunately I can't help you with Macinabox, but maybe one of these users can: @angelstriker, @thilo, @wubbl0rz, @ipxl. There is already a thread here about Macinabox, though unfortunately on a different topic: Click
  35. 2 points
    Feature Requests from Reddit:
    - determine which disks were involved and spin them down after script execution
    - multiple path support (to scan movies and tv shows through one script) -> realised
    - obtain recent tv shows through the Plex database and add the X next episodes to the cache (alternative request: move complete episodes to the SSD cache, which could be part of an additional script)
  36. 2 points
    After you accomplish that task, IMMEDIATELY take a new flash drive backup, and destroy or mark for destruction any previous flash drive backups. Reason being: if, in the fog of war, you have a failure and use an older flash backup to get up and running, it is very possible to make the array think that data drive should still be in the parity position, and overwrite it, irrecoverably erasing anything that was on the disk.
  37. 2 points
    I just got done printing and fitting these together for my new build. I'm going to have 2 of these (one with 2TB drives, one with 8TB drives), but it was dirt cheap and a lot easier than finding a case that will fit 8 3.5" drives. Can be used with or without the fan shroud on the front, but I had 2 Fractal Design 120mm fans in my drawer unused. They'll be powered by a 240w ATX power supply I had laying around with a switch put between the green and black pin to turn on or off and a 1M SAS -> 4 SATA cable plugged into an IBM ServeRAID M5015. If anyone's interested in the STL files for this, I'll throw them up on thingiverse.
  38. 2 points
    That's exactly your problem. Driver version 440.59 is not supported after PMS 1.20.2; you need at least driver version 450.51 on Linux. If you don't want to update to a beta release (thanks @MowMdown 😄 Didn't notice that he had a non-beta prebuilt!), head on over to the link below and get the "Unraid Custom nVidia builtin v6.8.3" version and copy the 8 files from the ZIP onto your flash drive. Don't forget to make a flash backup before doing so! You might have a new GPU UUID after upgrading, but you can still get that from the LSIO Nvidia plugin page, or grab the Unraid Kernel Helper and it will show there too.
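    As a rough sketch of that copy step (the ZIP name and download path are placeholders; the flash drive is mounted at /boot on a running Unraid server):
        # back up the flash drive first (Main -> Flash -> Flash Backup)
        unzip /mnt/user/downloads/unraid-custom-nvidia-6.8.3.zip -d /tmp/custom-build   # placeholder ZIP name
        cp /tmp/custom-build/* /boot/        # overwrite the stock bz* files on the flash drive
        reboot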
  39. 2 points
    Certainly well deserved with your great contributions Sent from my iPhone using Tapatalk
  40. 2 points
    Testing is now over; it looks like it's solid enough for me to release as latest. Images are now built for all VPN images I produce. Please remove the tag ':test' from the repository to pull down 'latest' again, and 'force update' to ensure it is the latest image that's on disk. If you wish to switch from openvpn to wireguard then please see Q21 from the following link:-
    https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
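    On Unraid that just means editing the container's Repository field (e.g. changing binhex/arch-delugevpn:test back to binhex/arch-delugevpn, using delugevpn only as an example) and then hitting force update. The plain-Docker equivalent is roughly:
        docker pull binhex/arch-delugevpn:latest    # same idea for the other VPN images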
  41. 2 points
    guinea pig time again - wireguard support now in, if you are interested then see here:- https://forums.unraid.net/topic/44109-support-binhex-delugevpn/?do=findComment&comment=433617
  42. 2 points
    B550 motherboards are not good for IOMMU groups. I don't know if a BIOS update will correct this. Choose an X570, it's better. Sent from my HD1913 using Tapatalk
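    If you want to check how your own board splits devices up, a common way to list the IOMMU groups from the Unraid terminal is a small loop like this (standard sysfs layout, nothing board-specific assumed):
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"      # show each PCI device in the group
            done
        done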
  43. 2 points
    @Squid I turned 50 on September 26. The next day I literally woke up feeling tired like I worked out the day before. Lol All parts still intact.
  44. 2 points
    Prebuilt images for Unraid v6.9.0beta30 now in the first post on the bottom. As always prebuilt with:
    nVidia
    nVidia & DVB (LibreELEC)
    nVidia & ZFS
    ZFS
    iSCSI
  45. 2 points
    5.9RC8-20201005
    update to 5.9RC8
    updated paragon ntfs3 v7
    updated dax virtio improvements
    update iommu
    amd improvements to pcie aer
    nvidia driver Persistence mode enabled at boot for lower idle power
    update/add drivers out of tree:
    corefreq kernel module (added utils for corefreq module, run corefreq-cli-run) 5da83ae
    zfs drivers a76e4e6
    tbsecp3 drivers 3cdeaee
    asus-wmi-sensors driver 3
    r8125 driver 9.003.05
    r8152 driver 2.13.0
    ryzen_smu driver 44a0f687
    tn40xx driver 0.3.6.17.3
    zenpower driver 0.1.12
    version for 6.8.3 and 6.9 beta30
  46. 2 points
    Since switching over to the next-gen PIA servers I have been seeing very slow website DNS resolves when using my browser, which has its proxy settings pointed at the Deluge privoxy port. After some troubleshooting I decided to remove all the prefilled PIA DNS addresses from Name_Servers in the Deluge template and only use the Cloudflare addresses. Doing this resolved my slow-resolve issue. It turns out that PIA next-gen has a new set of DNS server addresses as well; link to them below.
    https://www.privateinternetaccess.com/helpdesk/kb/articles/next-generation-dns-custom-configuration
    I added the new PIA DNS addresses to Name_Servers and everything is working normally again. Not sure if I missed this in binhex's instructions somewhere, but I suggest that everyone who has moved over to PIA next-gen also update Name_Servers from the PIA legacy addresses to the PIA next-gen addresses. Still not sure why I got slow resolves when I had the PIA DNS addresses listed in Name_Servers. Does anyone else have a browser pointed at the privoxy port and seeing very slow site resolves?
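    As an illustration of the Cloudflare-only setup (the exact variable name depends on your template; the values are just the well-known Cloudflare resolvers, with the PIA next-gen addresses from the link above appended if you want them back in):
        NAME_SERVERS=1.1.1.1,1.0.0.1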
  47. 2 points
    The 'plex' format uses the Plex Naming Standard (it does not get the path from Plex). See https://www.filebot.net/forums/viewtopic.php?t=4116
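    For example, from the command line it would look something like this (the input path, database and output folder are placeholders; the {plex} binding is the only part that matters here):
        filebot -rename "/downloads/Some.Show.S01E01.1080p.mkv" --db TheTVDB --format "{plex}" --output /mnt/user/Media --action move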
  48. 2 points
  49. 2 points
  50. 2 points
    So then would you recommend removing the established connection, then entering a secure share using my unraid account credentials? After that, could I enter all secure shares (that that unraid account has access to) and public shares without issue?