
Leaderboard


Popular Content

Showing content with the highest reputation since 09/23/20 in Posts

  1. 6 points
    Sadly you probably just got lucky, that is unless PIA are finally sorting their sh*t out. In any case the multi remote line code should help. As mentioned above, I am working on next-gen now and it's going well; I MIGHT have something for people to test by the end of today. p.s. LOTS more endpoints support port forwarding on next-gen - like dozens!
  2. 5 points
    Added prebuilt images for Unraid v6.9.0beta29 to the bottom of the first post. Prebuilt images include: nVidia, nVidia & DVB, nVidia & ZFS, ZFS, iSCSI.
  3. 3 points
    I've been using Unraid for a while now and have collected some experience on how to boost SMB transfer speeds:

    1.) Choose the right CPU
    The most important part is to understand that SMB is single-threaded: it uses only one CPU core to transfer a file, on both the server and the client. Usually this is not a problem, as SMB does not fully utilize a CPU core (except on really low-powered CPUs). But because of its ability to split shares across multiple disks, Unraid adds an additional process called SHFS, and its load rises proportionally to the transfer speed, which can overload that one CPU core. So the most important step is to choose the right CPU. At the moment I'm using an i3-8100, which has 4 cores and 2257 single-thread Passmark points, and with that single-thread power I'm able to use the full bandwidth of my 10G network adapter, which was not possible with my previous Intel Atom C3758 (857 points). With the Atom I was not even able to reach 1G speeds while a parallel Windows Backup was running (see the next section to bypass this limitation). Now I'm able to transfer thousands of small files and, in parallel, transfer a huge file at 250 MB/s. Based on this experience I suggest a CPU with around 1400 single-thread Passmark points to fully utilize a 1G Ethernet port. As an example, the smallest CPU I would suggest for Unraid is an Intel Pentium Silver J5040. P.S. Passmark has a list sorted by single-thread performance for desktop CPUs and server CPUs.

    2.) Bypass the single-thread limitation
    The single-thread limitation of SMB and SHFS can be bypassed by opening multiple connections to your server, which means connecting to "different" servers. The easiest way to accomplish that is to use the IP address of your server as a "second" server while using the same user login:
    \\tower\sharename -> best option for user access through the file explorer, as it is displayed automatically
    \\10.0.0.2\sharename -> best option for backup software; you could map it as a network drive
    If you need more connections, you can add multiple entries to your Windows hosts file (Win+R and execute "notepad c:\windows\system32\drivers\etc\hosts"):
    10.0.0.2 tower2
    10.0.0.2 tower3
    Results: if you now download a file from your Unraid server through \\10.0.0.2 while a backup is running against \\tower, the download reaches maximum speed, while a download from \\tower is massively throttled.

    3.) Bypass Unraid's SHFS process
    If you enable direct access to the cache disk and upload a file to \\tower\cache, this bypasses the SHFS process. Beware: do not move or copy files between the cache disk and shares, as this could cause data loss! Also, the eligible user account will be able to see all cached files, even those of other users.
    Temporary solution, or "for admins only": as admin, or for a short test, you could enable "disk shares" under Settings -> Global Share Settings. With that, all users can access all array and cache disks as SMB shares. As you don't want that, your first step is to click on each disk in the WebGUI > Shares and forbid user access, except for the cache disk, which gets read/write access only for your "admin" account. Beware: do not create folders in the root of the cache disk, as this will create new SMB shares.
    Safer permanent solution: use this explanation.
    Results: in this thread you can see the huge difference between copying to a cached share and copying directly to the cache disk.
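    If you want to confirm that the SMB/SHFS single-thread limit is really what caps your transfers before applying the workarounds above, a rough, hedged check is to watch per-process CPU usage on the server while a large SMB copy is running (a minimal sketch; the process names are the stock Unraid ones):

    # five samples, two seconds apart; one core pegged near 100% by smbd or shfs points to the single-thread limit
    top -b -d 2 -n 5 | grep -E 'shfs|smbd'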
    4.) Enable SMB Multichannel + RSS
    SMB Multichannel is a feature of SMB3 that allows splitting file transfers across multiple NICs (Multichannel) and multiple CPU cores (RSS), available since Windows 8. It raises your throughput depending on the number of NICs, their bandwidth, your CPU and the settings used.
    This feature is experimental: SMB Multichannel has been considered experimental since its release with Samba 4.4. The main bug behind this status is resolved in Samba 4.13, and the Samba developers plan to resolve all remaining bugs with 4.14. Unraid 6.8.3 contains Samba 4.11. This means you use Multichannel at your own risk!
    Multichannel for multiple NICs: let's say your mainboard has four 1G NICs and your client has a 2.5G NIC. Without Multichannel the transfer speed is limited to 1G (117.5 MByte/s). But if you enable Multichannel, it splits the file transfer across the four 1G NICs, boosting your transfer speed to 2.5G (294 MByte/s). Additionally it uses multiple CPU cores, which is useful to avoid overloading smaller CPUs.
    To enable Multichannel, open the Unraid web terminal and enter the following (the file is usually empty, so don't be surprised):
    nano /boot/config/smb-extra.conf
    And add the following to it:
    server multi channel support = yes
    Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Then restart the Samba service with this command:
    samba restart
    You may also need to reboot your Windows client, but after that Multichannel is enabled and should work.
    Multichannel + RSS for single and multiple NICs: but what happens if your server has only one NIC? Then Multichannel has nothing to split across, but it has a sub-feature called RSS, which is able to split file transfers across multiple CPU cores even with a single NIC. Of course this also works with multiple NICs. And this is important, because it creates multiple single-threaded SMB and SHFS processes that are load-balanced across all CPU cores, instead of overloading a single core. So if your server has slow SMB file transfers while the overall CPU load in the Unraid WebGUI dashboard is not really high, enabling RSS will boost your SMB file transfers to the maximum! But it requires RSS capability on both sides.
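    Before moving on to RSS, a quick server-side sanity check that Samba actually picked up the new option can help. A minimal, hedged sketch (testparm ships with Samba, and smb-extra.conf is pulled in via the stock smb.conf include):

    testparm -s 2>/dev/null | grep -i 'multi channel'
    # expected line once the include is active: server multi channel support = Yes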
    To check your server's NIC, open the Unraid web terminal and enter this command (this could become obsolete with Samba 4.13, as it has built-in RSS autodetection):
    egrep 'CPU|eth*' /proc/interrupts
    It must return multiple lines (one per CPU core), here shown for eth0:
           CPU0      CPU1      CPU2      CPU3
    129:   29144060  0         0         0         IR-PCI-MSI 524288-edge  eth0
    131:   0         25511547  0         0         IR-PCI-MSI 524289-edge  eth0
    132:   0         0         40776464  0         IR-PCI-MSI 524290-edge  eth0
    134:   0         0         0         17121614  IR-PCI-MSI 524291-edge  eth0
    Next, check your Windows 8 / Windows 10 client by opening PowerShell as admin and entering this command:
    Get-SmbClientNetworkInterface
    It must return "True" for "RSS Capable":
    Interface Index  RSS Capable  RDMA Capable  Speed    IpAddresses  Friendly Name
    ---------------  -----------  ------------  -----    -----------  -------------
    11               True         False         10 Gbps  {10.0.0.10}  Ethernet 3
    Now that you are sure RSS is supported on both sides, you can enable Multichannel + RSS by opening the Unraid web terminal and entering the following (the file is usually empty, so don't be surprised):
    nano /boot/config/smb-extra.conf
    Add the following, change 10.10.10.10 to your Unraid server's IP, and set the speed to "10000000000" for a 10G adapter or "1000000000" for a 1G adapter:
    server multi channel support = yes
    interfaces = "10.10.10.10;capability=RSS,speed=10000000000"
    If you are using multiple NICs, the syntax looks like this (add the RSS capability only for NICs that support it!):
    interfaces = "10.10.10.10;capability=RSS,speed=10000000000" "10.10.10.11;capability=RSS,speed=10000000000"
    Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Now restart the SMB service:
    samba restart
    Does it work? After rebooting your Windows client (this seems to be a must), download a file from your server (so a connection is established), and then check whether Multichannel + RSS works by opening Windows PowerShell as admin and entering this command:
    Get-SmbMultichannelConnection -IncludeNotSelected
    It must return a line similar to this (a returned line = Multichannel works), and if you want to benefit from RSS, "Client RSS Capable" must be "True":
    Server Name  Selected  Client IP     Server IP    Client Interface Index  Server Interface Index  Client RSS Capable  Client RDMA Capable
    -----------  --------  ---------     ---------    ----------------------  ----------------------  ------------------  -------------------
    tower        True      10.10.10.100  10.10.10.10  11                      13                      True                False
    If you are interested in test results, look here.
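    As a supplementary server-side check (a hedged sketch; it assumes your NIC driver exposes its queue configuration via ethtool), you can also list how many receive/combined queues the NIC offers, since that is what RSS spreads across CPU cores:

    ethtool -l eth0
    # "Combined" (or "RX") under "Current hardware settings" should be greater than 1 for RSS to make a difference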
    5.) smb.conf settings tuning
    At the moment I'm doing intense tests with different SMB config settings found on different websites:
    https://wiki.samba.org/index.php/Performance_Tuning
    https://wiki.samba.org/index.php/Linux_Performance
    https://wiki.samba.org/index.php/Server-Side_Copy
    https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/speed.html
    https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
    https://lists.samba.org/archive/samba-technical/attachments/20140519/642160aa/attachment.pdf
    https://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
    https://www.samba.org/samba/docs/current/man-html/ (search for "vfs")
    https://lists.samba.org/archive/samba/2016-September/202697.html
    https://codeinsecurity.wordpress.com/2020/05/18/setting-up-smb-multi-channel-between-freenas-or-any-bsd-linux-and-windows-for-20gbps-transfers/
    https://www.snia.org/sites/default/files/SDC/2019/presentations/SMB/Metzmacher_Stefan_Samba_Async_VFS_Future.pdf
    https://www.heise.de/newsticker/meldung/Samba-4-12-beschleunigt-Verschluesselung-und-Datentransfer-4677717.html
    I will post my results after all tests are finished. For now I would say they do not really influence performance, as recent Samba versions are already well optimized, but we will see.

    6.) Choose a proper SSD for your cache
    You could use Unraid without an SSD, but if you want fast SMB transfers an SSD cache is absolutely required; otherwise you are limited by slow parity writes and/or your slow HDDs. But many SSDs on the market are not well suited for use as an Unraid SSD cache.
    DRAM: many cheap models do not have a DRAM cache. This small buffer is used to collect very small files or random writes before they are finally written to the flash, and/or serves as a high-speed area for the file mapping table. In short: you need a DRAM cache in your SSD. No exception.
    SLC cache: while DRAM is only absent in cheap SSDs, an SLC cache can be missing in any price range. Some cheap models use a small SLC cache to "fake" their spec-sheet numbers. Some mid-range models use a big SLC cache to raise durability and speed when installed in a client PC. And some high-end models have no SLC cache at all, as their flash cells are fast enough without it. Ultimately you are not interested in the SLC cache; you are only interested in continuous write speed (see "Verify the continuous writing speed of the SSD").
    Determine the required writing speed: before you can select the right SSD model, you need to determine your minimum required transfer speed. This should be simple: how many Ethernet ports do you want to use, and do you plan to install a faster network adapter? Let's say you have two 1G ports and plan to install a 5G card. With SMB Multichannel it's possible to use them together, and since you plan to install a 10G card in your client, you could use 7G in total. Now we can calculate: 7 x 117.5 MByte/s (real throughput per 1G of Ethernet) = 822 MByte/s, which leaves two options:
    buy one M.2 NVMe SSD (assuming your motherboard has such a slot) with a minimum writing speed of 800 MByte/s, or
    buy two or more SATA SSDs and use them in a RAID0, each with a minimum writing speed of 400 MByte/s.
    Verify the continuous writing speed of the SSD: as an existing SLC cache hides the real transfer speed, you need to invest some time to check whether your desired SSD model has an SLC cache and how much the SSD throttles after it is full. One approach is to search for "review slc cache" in combination with the model name. Using the image search can be helpful as well (maybe you see a graph with a falling line).
    If you do not find anything, use YouTube; many people out there test their new SSD simply by copying a huge amount of files onto it. Note: CrystalDiskMark, AS SSD and similar benchmarks are useless here, as they only test a really small amount of data (which fits into the fast cache).
    Durability: you could look at the "TBW" value of the SSD, but in the end you won't be able to kill the SSD within the warranty period, as long as the very first filling of your Unraid server is done without the SSD cache. As an example, a 1TB Samsung 970 EVO has a TBW of 600, and if your server has a total size of 100TB you would waste 100TBW on your first fill for nothing. If you plan to use Plex, think about using RAM as your transcoding storage, which saves a huge amount of writes to your SSD. Conclusion: optimize your writes instead of buying an expensive SSD.
    NAS SSDs: do not buy "special" NAS SSDs. They do not offer any benefit compared to high-end consumer models, but cost more.

    7.) More RAM
    More RAM means more caching, and as RAM is even faster than the fastest SSDs, this gives an additional boost to your SMB transfers. I recommend installing two identical RAM modules (or more, depending on the number of slots) to benefit from "dual channel" speeds. RAM frequency is not as important as RAM size.
    Read cache for downloads: if you download a file twice, the second download does not read the file from your disk; it is served from RAM only. The same happens if you're loading the covers of your MP3s or movies, or if Windows is generating thumbnails of your photo collection. More RAM means more files in your cache. The read cache uses by default 100% of your free RAM.
    Write cache for uploads: Linux uses by default 20% of your free RAM to cache writes before they are written to the disk. You can use the Tips and Tweaks plugin to change this value, or add this to your go file (with the Config Editor plugin):
    sysctl vm.dirty_ratio=20
    But before changing this value, you need to be sure you understand the consequences:
    Never use your NAS without a UPS if you use write caching, as this could cause huge data loss!
    The bigger the write cache, the smaller the read cache (so using 100% of your RAM as write cache is not a good idea!).
    Files you upload to your server are written to disk 30 seconds later (vm.dirty_expire_centisecs).
    Without SSD cache: if your upload size is generally higher than your write cache size, the kernel starts to clean up the cache and, in parallel, write the transfer to your HDD(s), which can result in slow SMB transfers. Either raise your cache size so it never fills up, or consider disabling the write cache entirely.
    With SSD cache: SSDs love parallel transfers (read #6 of this guide), so a huge or even full write cache is not a problem.
    But which dirty_ratio value should you set? This is something you need to determine yourself, as it is completely individual. First, think about the highest possible RAM usage (active VMs, RAM disks, Docker containers, etc.). From that you get the smallest amount of free RAM on your server:
    Total RAM size - RAM reserved by VMs - RAM used by Docker containers - RAM disks = free RAM
    Now the harder part: determine how much RAM is needed for your read cache. Do not forget that VMs, Docker containers, processes etc. load files from disks, and those are all cached as well. I thought about this and came up with this command, which counts hot files:
    find /mnt/cache -type f -amin -1440 ! -size +1G -exec du -bc {} + | grep total$ | cut -f1 | awk '{ total += $1 }; END { print total }' | numfmt --to=iec-i --suffix=B
    It counts the size of all files on your SSD cache that were accessed in the last 24 hours (find's -amin works in minutes, hence 1440).
    The maximum file size is 1GiB, to exclude VM images, Docker containers, etc.
    This works only if you (hopefully) use your cache for your hot shares like appdata, system, etc.
    Of course you could repeat this command on several days to check how the value fluctuates.
    This command must be executed after the mover has finished its work.
    This command isn't perfect, as it does not count hot files inside a VM image.
    Now we can calculate:
    100 / total RAM x (free RAM - command result) = vm.dirty_ratio
    If your calculated vm.dirty_ratio is lower than 5% (or even negative), you should lower it to a value between 0 and 5 and buy more RAM. Between 5% and 20%, set it accordingly, but consider buying more RAM. Between 20% and 90%, set it accordingly. If it is higher than 90%, you are probably not using your SSD cache for hot shares (as you should), or your RAM is huge as hell (congratulations ^^); I suggest not setting a value higher than 90.
    Of course you need to recalculate this value if you add more VMs or Docker containers.
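    To make the formula concrete, a worked example with purely hypothetical numbers: 32 GiB total RAM, 8 GiB reserved for VMs and Docker containers (leaving 24 GiB free), and the hot-file command above reporting roughly 4 GiB:

    # 100 / 32 x (24 - 4) = 62.5 -> rounded down to a safe 60
    sysctl vm.dirty_ratio=60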
  4. 3 points
    Support for multi remote endpoints and PIA 'Next-Gen' networks is now complete, see Q19 and Q20 for details:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  5. 3 points
    OK, testing is now over. I am satisfied that the PIA next-gen network using OpenVPN is working well enough to push this to production, so the changes have now been included in the 'latest' tagged image. If you have been using the 'test' tagged image, please drop ':test' from the repository name and click Apply to pick up 'latest' again. If you want to switch from current to next-gen, please generate a new ovpn file using the following procedure:- Note:- The new image will still support the current (or legacy) PIA network for now, so you can use either, dictated by the ovpn file.
  6. 3 points
    Don't do that unless you have bonded the UPS ground to earth through some other means. Any current imbalance between the equipment powered by the UPS and other equipment still grounded will travel through whatever connections are still there, probably your network at the very least. Turn off the circuit at the breaker, or temporarily plug the UPS into a switched outlet or power strip, but don't just yank the plug out of the wall.
  7. 3 points
    Hello, I would like to thank the Unraid team that created the product. For me it is simply awesome. I'm a one-year Unraid user and I want to share a little of my thoughts about the server. I had some prior experience with Linux (Debian/Ubuntu based mostly), but I always wanted something like Unraid that has both the liberty of creating scripts/custom stuff from the Linux console and predefined stuff in an interface to get repetitive things done fast (here I'm thinking Dockers and Community Applications). Intentionally I left out virtualisation, because for me this was a selling point: I was in a period where I tried to do that on my own with Fedora and such, and it was a big hassle... when I tried Unraid and it did that out of the box... I never looked back. Bottom line: now I have Dockers and VMs for everything I wanted, and with the added pools my life is even better. PS. Many many thanks to the Unraid team.
  8. 3 points
    OK guys, multi remote endpoint support is now in for this image, please pull down the new image (this change will be rolled out to all my vpn images shortly). What this means is that the image will now loop through the entire list of, for example, pia port forward enabled endpoints. All you need to do is edit your ovpn config file and add the remote endpoints at the top, sorted into the order you want them to be tried. An example pia ovpn file is below (mine):-

    remote ca-toronto.privateinternetaccess.com 1198 udp
    remote ca-montreal.privateinternetaccess.com 1198 udp
    remote ca-vancouver.privateinternetaccess.com 1198 udp
    remote de-berlin.privateinternetaccess.com 1198 udp
    remote de-frankfurt.privateinternetaccess.com 1198 udp
    remote france.privateinternetaccess.com 1198 udp
    remote czech.privateinternetaccess.com 1198 udp
    remote spain.privateinternetaccess.com 1198 udp
    remote ro.privateinternetaccess.com 1198 udp
    client
    dev tun
    resolv-retry infinite
    nobind
    persist-key
    # -----faster GCM-----
    cipher aes-128-gcm
    auth sha256
    ncp-disable
    # -----faster GCM-----
    tls-client
    remote-cert-tls server
    auth-user-pass credentials.conf
    comp-lzo
    verb 1
    crl-verify crl.rsa.2048.pem
    ca ca.rsa.2048.crt
    disable-occ

    I did look at multi ovpn file support, but this is easier to do, and as openvpn supports multiple remote lines it felt like the most logical approach. note:- Due to the ns lookup for all remote lines, and the potential failure and subsequent try of the next remote line, time to initialisation of the app may take longer. p.s. I don't want to talk about how difficult this was to shoehorn in, i need to lie down in a dark room now and not think about bash for a while :-), any issues let me know!
  9. 2 points
    Ultimate UNRAID Dashboard (UUD)
    Current Release: Version 1.3 | Version 1.4 In Active Development!
    Overview: This is my attempt to develop the ultimate Grafana/Telegraf/InfluxDB dashboard. This entire endeavor started when one of our fellow users, @hermy65, posed a simple but complex question in the forum topic below. I decided to give it a shot, as I am an IT professional, specifically in enterprise data warehousing/SQL Server. If you are a Grafana developer, or have experience building dashboards/panels for UNRAID, please let me know. I would love to collaborate.
    Version 1.3 Screenshots - Serial Numbers Redacted (Click the Images as They are Very High Resolution):
    Disclaimer: This is based on my 30-drive UNRAID array, so it shows an example of a fully maxed-out UNRAID setup with max drives, dual CPUs, dual NICs, etc. You will/may need to adjust panels and queries to accommodate your individual UNRAID architecture. This is a heavily modified and customized version of GilbN's original from his tutorial website, with new and original code; as such, he is a co-developer on this version. I have spent many hours custom coding new functionality and features based on that original template. Much has been learned and I am excited to see how far this can go in the future. GilbN has been gracious enough to help support my modded version here, as he wrote the back-end. Thanks again!
    Developers:
    Primary Developer: @falconexe (USA) - UUD Founder | Active Development | Panels | Database Queries | Look & Feel | GUI | Refinement | Support
    Co-Developer: @GilbN (Europe) - Original Template | Back-end | Dynamics | REGEX | Support | Tutorials
    Contributors: @hermy65 @atribe @Roxedus @SpencerJ @testdasi @ChatNoir @MammothJerk @FreeMan @danktankk
    Dependencies (Last Updated On 2020-09-24; see the telegraf.conf sketch at the end of this post):
    Docker - InfluxDB
    Docker - Telegraf
      Docker Network Type: HOST (otherwise you may not get all server metrics)
      👉 Create Telegraf Configuration File 👈 (DO THIS FIRST!): create and place a file into the directory "mnt/user/appdata/YOUR_TELEGRAF_FOLDER"
      Enable and install Telegraf plugins:
        Telegraf Plugin - [[inputs.net]]: enable in telegraf.conf
        Telegraf Plugin - [[inputs.docker]]: enable in telegraf.conf
        Telegraf Plugin - [[inputs.diskio]]: enable in telegraf.conf. To use static drive serial numbers in Grafana (for DiskIO queries), edit telegraf.conf > [[inputs.diskio]] > add device_tags = ["ID_SERIAL"] > use the ID_SERIAL tag in Grafana. Upon booting you then don't have to worry about sd* mounts changing (so your graphs don't get messed up!). You can also set overrides on the query fields to map the serial number to a common disk name like "DISK01" etc.
        Telegraf Plugin - [[inputs.smart]]: enable in telegraf.conf, also enable "attributes = true", then bash into the Telegraf docker and run "apk add smartmontools"
        Telegraf Plugin - [[inputs.ipmi_sensor]]: enable in telegraf.conf, then bash into the Telegraf docker and run "apk add ipmitool"
        Telegraf Plugin - [[inputs.apcupsd]]: enable in telegraf.conf
      Docker Config: add new path (NOTE: this path has now been merged into Atribe's Telegraf Docker image. Thanks @GilbN & @atribe)
      Post Arguments: "/bin/sh -c 'apk update && apk upgrade && apk add ipmitool && apk add smartmontools && telegraf'"
    Docker - Grafana
      CA Plugin: IPMI Tools
    Dashboard Variables (Update These For Your Server)
    Compatible With: Grafana Unraid Stack (@testdasi) Docker: https://hub.docker.com/r/testdasi/grafana-unraid-stack
    Let me know if you have any questions or are having any issues getting this up and running if you are interested. I am happy to help. I haven't been this geeked out about my UNRAID server in a very long time. This is the cherry on top for my UNRAID experience, going back to 2014 when I built my first server. Thanks everyone!
    VERSION 1.3 (Latest): Ultimate UNRAID Dashboard - Version 1.3 - 2020-09-21 (falconexe).json
    VERSION 1.2 (Deprecated): Ultimate UNRAID Dashboard - Version 1.2 - falconexe.json
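    For reference, here is a hedged sketch of what the relevant telegraf.conf sections might look like once the plugins above are enabled. Only the options mentioned in this post are shown; the docker endpoint is Telegraf's documented default, and everything else is left at its defaults:

    [[inputs.net]]

    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"  # Telegraf's default Docker socket

    [[inputs.diskio]]
      device_tags = ["ID_SERIAL"]  # static serial numbers instead of sd* device names

    [[inputs.smart]]
      attributes = true  # requires smartmontools inside the container

    [[inputs.ipmi_sensor]]  # requires ipmitool inside the container

    [[inputs.apcupsd]]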
  10. 2 points
    Just updated to latest, connected straight away on nextgen. Awesome work!
  11. 2 points
    By default Linux uses 20% of free RAM for write caching; this can be adjusted manually or, for example, with the Tips and Tweaks plugin.
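    If you want to adjust it manually, a minimal sketch (the value is a percentage; 20 is the Linux default mentioned above, and the change does not persist across reboots unless added to the go file):

    sysctl vm.dirty_ratio        # show the current value
    sysctl vm.dirty_ratio=20     # set it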
  12. 2 points
    I've been working on both 1.4 and 1.5 simultaneously. They will still be released separately, but the code overlaps in some areas, so I had to figure some of it out now. The goal is a super clean and refined Varken/Tautulli/Plex dash, which will be integrated directly into the UUD, sporting some of the same falconexe style/customizations (like working growth trending) found in the UUD. @Stupifier, thought you would appreciate this sneak peek...
  13. 2 points
    Thanks for the update, will rebuild the prebuilt images tomorrow and update the first post. EDIT: Think some updates to the DVB builds are required.
  14. 2 points
  15. 2 points
    @testdasi You may want to consider this. I’m tentatively planning on adding Varken/Plex panels/stat tracking to the Ultimate UNRAID Dashboard (UUD) in version 1.5. Not a guarantee until I get into it, but if I can integrate some/all of it, that would be cool.
  16. 2 points
    I have very little computer experience, just the basic things I use computers for, such as email, internet, Word, and some software we use for our store like accounting. Yet I found myself in need of a server, and it was unrealistic right now for us to buy anything and pay someone to set it all up, so I decided to try to put something together myself. At first I was just going to get an off-the-shelf NAS, but after some research I decided I could build something much more useful for the same money. I started reading and decided on Unraid. Here on the forum you guys helped me with some hardware questions... I've never built a computer before. It took me some time to get all the parts, but I got them all in during August and got it all put together. Then during the 30-day Unraid trial, which just ended yesterday, I was able to set up Unraid, configure everything, and have Sonarr, Radarr and Lidarr all running along with Plex. I have a Win10 VM set up with hardware passed through, the WireGuard VPN set up and running, and Nextcloud syncing to my phone with a reverse proxy, all configured on my own custom domain. It's amazing that someone with no computer knowledge can get this far! Before starting this journey I didn't even know what Unraid was or what parity was. I owe it to all of you who have helped me here on the forum, as well as SpaceInvader for his amazing videos. I want to thank all of you and let you know that I greatly appreciate all of the help and support. You guys are great. It's a rock solid server and I'm grateful to have it; we needed it badly. Thank You! Stephen
  17. 2 points
  18. 2 points
    It also has the advantage that you can back up incrementally with hardlinks, which gives you almost the desired snapshots (see the sketch below). As I wrote, I back up the SSD to the HDD array every 24 hours (simply with rsync); of course you could do that more often. Since 2006, RJ45 (IEC 60603-7 8P8C) has been part of the 10G standard (and of the 40G standard with CAT8, by the way). The only limitation concerns CAT5e, which is limited to 40-50 m, but who has runs that long in a single-family house or apartment. And from CAT6 upwards there is no limitation at all.
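    A minimal sketch of such an incremental hardlink backup with rsync (the paths and schedule are hypothetical; on the first run the "latest" link does not exist yet, so rsync simply performs a full copy):

    TODAY=$(date +%Y-%m-%d)
    # unchanged files are hardlinked against the previous run instead of being copied again
    rsync -a --delete --link-dest=/mnt/disk1/Backup/latest /mnt/cache/ "/mnt/disk1/Backup/$TODAY/"
    # repoint "latest" at the run that just finished
    rm -f /mnt/disk1/Backup/latest
    ln -s "/mnt/disk1/Backup/$TODAY" /mnt/disk1/Backup/latest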
  19. 2 points
    I sent a pull request with the new version id to spaceinvaderOne. In the meantime, as a workaround until he has updated his docker, you can change the docker repository of the template to mattti/macinabox. The only change is the working Catalina version.
  20. 2 points
    Plex/Varken is coming... Possibly Planned For 1.5
  21. 2 points
    A consideration for those of us who don't have 43" 4k monitors (which I would imagine to be the significant majority of your audience ). Remove the word "Storage" from the top utilization boxen: On my 24" 1920x1280 monitor, I see a series of "Array Storage ..." "Array Storage ..." because the titles are too long for the box size. The same is true for the Cache storage text: Yes, I can edit the titles (and have done so for the first 2, leaving the rest as examples), but I think that the vast majority of folk will be able to figure out that an "Array Total" measured in "TB" refers to "storage" without having to be explicitly told. And for those that can't, do you really want the support nightmares of them asking eleventy-seven bajillion questions? (He asks checking his post count on this thread, noting it's approaching eleventy-two bajillion... )
  22. 2 points
    I finally got all the UPS stats showing properly. The hold-up was something stupid, as is often the case. I installed the Grafana-Unraid-Stack (GUS) from @testdasi while using UUD as the dashboard. For some reason, I could not get the UPS stats to show no matter what I did. I edited telegraf.conf to add [[inputs.apcupsd]], deleted telegraf.conf and forced an update on GUS, which supposedly downloaded a new telegraf.conf with apcupsd enabled, etc. Nothing. Then I realized the mistake. Two years ago, I installed Grafana, InfluxDB and Telegraf docker containers to play around with setting up an unRAID dashboard. It was taking too much time and I had other priorities, so I deleted the docker containers; however, the folders were still in appdata with their contents, including telegraf.conf. I had been editing and deleting the wrong file!!! The telegraf.conf file that matters is in appdata/Grafana-Unraid-Stack/telegraf. I deleted the old stand-alone folders and their contents, enabled [[inputs.apcupsd]] by editing the telegraf.conf in the correct folder, and voila!, UPS stats in UUD. After adding the cost per kWh and the max wattage of the UPS in UUD, everything is now working as it should, with customizations for my hardware. Thanks to @falconexe and @testdasi (and @GilbN) for making this all so simple. I remember what a pain it was two years ago, which is why I eventually moved on to other things. Now I have a really good looking dashboard based on an extremely easy to install and use docker container stack, without all the manual messing around in myriad config files.
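    If you want to double-check that you are editing the telegraf.conf GUS actually reads (the pitfall described above), a quick hedged check from the Unraid terminal, assuming the default appdata location:

    grep -n 'inputs.apcupsd' /mnt/user/appdata/Grafana-Unraid-Stack/telegraf/telegraf.conf
    # the [[inputs.apcupsd]] line must be present and not commented out with a leading "#"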
  23. 2 points
    I *was* a plank. Having read the run command output, I realise that the names of the folders I assigned in the container were what the GUI was expecting to have in the fields, not the *actual* paths - DOH. This is the GUI now, with correctly identified paths and no more missing folders.
  24. 2 points
    It looks like I've worked it out. I have an i7-5820K CPU with 28 lanes on an Asus X99 Deluxe with 6 PCIe slots. I had the two HBA cards in the wrong slots, which made them run at PCIe 2.0; I should have used slots 1, 3 and 5, but I was using 1, 4 and 6. Now my estimated parity speed is 148 MB/s with 23 hours to complete.
  25. 2 points
  26. 2 points
    I’ve been running @doron’s script for a few days and it has been perfect so far! Absolutely no unwanted spin ups, and my SAS drives are always sent to standby when needed. Thank you so much to everyone who contributed to this thread, especially @SimonF and @doron (and thanks for crediting me in your script even if I really just suggested something and haven’t written a single line of code 😆). Now let’s push this to the devs and get it included in Unraid! Stay safe and keep up the positive vibes 😉
  27. 2 points
    Wanted to share what I have achieved with this great tool: Catalina with a passed-through 5700XT and a Samsung 970 EVO NVMe, Ubuntu on a regular array drive with a placeholder GT710 standing in for my RTX 3080, 16 cores and 28GB RAM each, controlling both with Synergy using 1 keyboard and 1 mouse. Loving my system with all my soul, thanks again @SpaceInvaderOne
  28. 2 points
    Sweet! UUD 1.4 will be out soon. @testdasi You may want to specify that UUD version 1.3 is currently what's added. Thanks for officially integrating the UUD as another option within GUS. This is really cool. @SpencerJ Are you aware of the awesome work that @testdasi did to make a single docker solution for InfluxDB/Telegraf/Grafana? He just integrated the Ultimate UNRAID Dashboard. It is basically a single docker solution for everything, for users who appreciate/need that kind of out-of-the-box setup. The timing of his work couldn't have been better!
  29. 2 points
    Thanks. It's done. Thank you so much, I'm grateful!
  30. 2 points
    Update (23/09/2020): Grafana Unraid Stack changes:
    Expose the InfluxDB RPC port and change it to a rarer default value (58083) instead of the original, common 8088.
    Added falconexe's Ultimate UNRAID Dashboard.
    The GUS dashboard is based on a Threadripper 2990WX; you will have to customize the default dashboard to suit your own exact hardware. Also give UUD a try to see if you like that layout more.
  31. 2 points
    @testdasi please do. Love this.
  32. 2 points
    I saw that yesterday and have begun coding. I have a working incoming port on next-gen for OpenVPN; it just needs more work to make it production ready.
  33. 2 points
    We can't do that on this forum. We can only recommend a post so it appears at the top, and only mods can do that. It's really not a solution in this case, but I'll do it in case another user has the same issue before it's fixed and finds this thread.
  34. 1 point
    None of the VPN ones work for me. I followed the guide and copied the openvpn.ovpn file over to appdata, and it initiates the connection but then gets stuck on the following message, which just keeps repeating: "Connection in progress, wait 10". Any hints?
  35. 1 point
    I have now repacked this stopgap solution as a proper Unraid plugin, so it can be installed / removed / enhanced etc. You can install the plugin from this URL: https://raw.githubusercontent.com/doron1/unraid-sas-spindown/master/sas-spindown.plg Enjoy! Please report any issues (or success...).
  36. 1 point
    Here's the latest on that bottom section...
  37. 1 point
    I would suggest you test by connecting power to the newly added disk, but not connecting it to the system. If no disk drops, you can assume it is not a power issue. You have (1) two HBAs, (2) a fairly old mainboard, and (3) PCIe switch chips onboard (PEX 8608); those may also be the cause, and in fact I suspect a problem on the PCIe bridge. Could you further try using only the HBA that hasn't dropped a disk: remove the other HBA, connect 8 disks to it and 1 disk to the mainboard (9 disks total). ** Please set the array not to auto-start and manually start the array in maintenance mode, then perform a parity check; this fully loads all disks and minimises the impact even if a disk drops or the system crashes. **
  38. 1 point
    It seems to be something related to @binhex's code, as this "bug" appears here, too:
  39. 1 point
    I wanted to add that your stereo might act as a Roon endpoint if it's on Roon's list of partners. Otherwise, as dkerlee has noted, a Raspberry Pi, a laptop, or casting from your phone will work as well. Roon also supports building your own Roon server (not exactly Unraid related :), but wanted to throw that out there) on a supported NUC via their ROCK software. List of partners who can act as a Roon player (my Lyngdorf is seen as a player when I start Roon on my phone or tablet): https://roonlabs.com/partners ROCK for a DIY server as an alternative to running Roon on Unraid: https://kb.roonlabs.com/Roon_Optimized_Core_Kit If you have Sonos, you can also use them as endpoints for direct playback.
  40. 1 point
    USB2 drives are not a requirement, but they are still recommended as they tend to work more reliably. They also seem to be longer-lived, which is thought to be because they run much cooler - something you want in a drive left plugged in 24x7. Since Unraid runs from RAM, there is no performance advantage to using USB3 except for a few seconds faster boot time while it is loaded into RAM.
  41. 1 point
  42. 1 point
    Here you go:
    https://www.dropbox.com/s/f3fp04zsgp1g4a0/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz?dl=0
    https://www.dropbox.com/s/z381hehf28k3gj5/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz.md5?dl=0
    You can either rename and replace the files in /boot/config/plugins/unRAID6-ZFS/packages or run these commands:

    #Unmount bzmodules and make rw
    if mount | grep /lib/modules > /dev/null; then
      echo "Remounting modules"
      cp -r /lib/modules /tmp
      umount -l /lib/modules/
      rm -rf /lib/modules
      mv -f /tmp/modules /lib
    fi

    #install and load the package and import pools
    installpkg zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz
    depmod
    modprobe zfs
    zpool import -a
  43. 1 point
    Btw, if you want a standalone dashboard for NUT: https://grafana.com/grafana/dashboards/10914
  44. 1 point
    Just tried your rTorrent container & tested a Radarr addition; got speeds at the level I'd expect when downloading (roughly 8-10 MiB/s, which is a far cry from the 600 KiB/s I was getting in Deluge 😂). I'll have a look at setting that up properly to use for the time being, & keep Deluge on for next-gen testing for now. Edit: spoke too soon - I seem to have gone back to a global DL rate of ~600 KiB/s. Must be PIA, but it's still annoying that others are seeing much faster rates, in line with their connection speed, than I seem to be able to achieve specifically through Unraid torrent dockers 😕
  45. 1 point
    As an alternative to IPMI for monitoring CPU/System/Aux temps, you can try the Sensors plugin (a quick verification sketch follows below):
    Enable [[inputs.sensors]] in the Telegraf config (uncomment it)
    Bash into the Telegraf docker and execute "apk add lm_sensors"
    Stop all 3 dockers (Grafana > Telegraf > InfluxDB)
    If you want to keep this plugin in perpetuity, you will need to modify your Telegraf docker Post Arguments to (adding lm_sensors): "/bin/sh -c 'apk update && apk upgrade && apk add ipmitool && apk add smartmontools && apk add lm_sensors && telegraf'"
    Start all 3 dockers (InfluxDB > Telegraf > Grafana)
    Let me know if that works for you.
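    To verify the sensors are actually readable inside the container before expecting data in Grafana, a small hedged sketch (it assumes the Telegraf container is simply named "telegraf"):

    docker exec telegraf sensors
    # should print CPU/system temperature readings once lm_sensors is installed in the container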
  46. 1 point
    Yes, this is possible. I moved in 2018 from a Linux installation on an Intel NUC to Docker on Unraid and did not lose anything from my metadata. @SpaceInvaderOne did a video about transferring Plex installations from one container to another (Docker). Not completely your case, but it should help enough to get your head around it: Plex2Plex by Spaceinvader One
  47. 1 point
    I'll also be adding dynamic support for segregating/isolating Unassigned Devices (Disks) in this next release!
  48. 1 point
    # State column ($3) of "virsh list --all" for the VM named vmtest
    tmp=$(virsh list --all | grep " vmtest " | awk '{ print $3}')

    if [ "x$tmp" == "x" ] || [ "x$tmp" != "xrunning" ]; then
        echo "VM does not exist or is shut down!"
        # Try additional commands here...
    else
        echo "VM is running!"
    fi

    Code similar to this in an hourly schedule.
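    One way to run it hourly (a hedged sketch - the script path is hypothetical, and the User Scripts plugin's built-in hourly schedule is the more common route on Unraid):

    # standard cron syntax: run at minute 0 of every hour
    0 * * * * bash /boot/config/custom/check_vm.sh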
  49. 1 point
    You should post the diagnostics (Tools - Diagnostics), as they give far more information. But, unless you're running with very little RAM, this:
    Aug 16 10:06:44 Unraid root: Fix Common Problems: Warning: Rootfs file is getting full (currently 77 % used)
    is something that is effectively an error.
  50. 1 point
    @jovdk this video is pretty helpful when first learning about Unraid. Bear in mind it's a bit older and his tutorials now are much better! There are ways to make very high performance read/write folders or systems, but that's more hardware and workflow based. Hope this video helps!