Everything posted by mgutt

  1. Nope. My configuration contains only default values, except for the Multichannel part explained in this guide. SMB 3.1.1 is automatically used by my Windows 10 client (remove your min protocol setting and use "Get-SmbConnection" in PowerShell to verify that 3.1.1 is in use). But even without the Multichannel setting my performance was already good. It was only bad with my old CPU. What is your hardware setup?
EDIT: Ok, found it here. Regarding my single-thread theory, your Pentium G4560 should be sufficient. Use this little checklist (replace "sharename" with one of your cached shares):
Is your SSD cache fast enough (with a full SLC cache)?
Open the Webterminal, execute htop and leave it open.
Open a second Webterminal and generate a huge random file on your cache with dd if=/dev/urandom iflag=fullblock of=/mnt/cache/sharename/10GB.bin bs=1GiB count=10
Download this file from \\tower\sharename to your Windows client, note the speed and check the load/processes in htop.
Download it again and check the read speed of your SSD in the Unraid WebGUI. Did it fall to zero? (It should, as the 10GB file should fit into your RAM = Linux cache.)
Download it from \\server.ip\sharename (#2 of this guide), note the speed and check the load/processes in htop.
Enable Disk Shares temporarily in Global Settings and download it from \\server.ip\cache\sharename (#3 of this guide, to bypass SHFS), note the speed and check the load/processes in htop, then disable Disk Shares afterwards (for security reasons, as described).
Disable Hyper-Threading in your BIOS and repeat this test. My theory is that SMB indeed uses a different thread, but (randomly) the same core as the SHFS process, which would make Hyper-Threading absolutely useless for Unraid. But until now nobody has helped me verify this (see the sketch below for checking this directly on the server).
What is your conclusion? What happens with the load and processes in the different tests, and how fast can you go without SHFS?
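If someone wants to check the core-sharing theory directly on the server, a minimal sketch (assuming the process names smbd and shfs that Unraid uses) is to print the CPU core each thread is currently scheduled on while a transfer runs:
# list all threads of smbd and shfs with the core they are currently running on (PSR column)
# repeat this a few times during a file transfer to see whether both land on the same core
ps -eLo pid,lwp,psr,pcpu,comm | grep -E 'smbd|shfs'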
  2. Do you have bigger photos? They are really small
  3. Are you? He is only able to access \\tower\cache\sharename and not \\tower\cache. I will implement this too, as it is really safer. Of course the guide will be updated as well.
  4. Client Test Results
The client clearly benefits from RSS. The 4th CPU core is not overloaded anymore and the client now runs absolutely smoothly. Love it.
RSS disabled (client lags)
RSS enabled (no lags)
Conclusion
RSS is a must. It helps distribute the load evenly across the client's and server's CPU cores, so the client and server are able to transfer faster. Once set up, it works automatically with all Windows clients since Windows 8. Need more performance? Read my guide: RSS Server-Load-Test.zip
  5. @falconexe Aaaaaaahhhh..... yesssss! I found a solution 😱 RSS is True 🥳
Get-SmbMultichannelConnection -IncludeNotSelected
Server Name Selected Client IP Server IP Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- --------- --------- ---------------------- ---------------------- ------------------ -------------------
tower       True     10.0.0.3  10.0.0.21 11                     13                     True               False
This one was really hard. First the story ^^
As mentioned in this thread, someone tried to set the "interfaces" variable, but had no luck. I read the documentation and found this part: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html#INTERFACES
I mean, what?! It's able to set the network's capability? How do I do that? I searched and searched and by accident I found a Samba bug report explaining how this thing needs to be formatted:
interfaces = "172.31.9.162;if_index=1,capability=RSS,speed=100000" "172.31.9.62;if_index=2,capability=RSS,capability=RDMA"
As the documentation mentioned "eth0" in its example, I tried different variants like "eth0;capability=RSS" and "eth*;capability=RSS", but nothing seemed to work. Then I found this blog post: So I replaced "eth0" with the IP address (10000000000 bits are 10 Gbit/s, so remove one zero if you need it for a 1G adapter):
interfaces = "192.168.178.9;capability=RSS,speed=10000000000"
Then I restarted the Samba server:
samba restart
And... something strange happened. In Windows, "Get-SmbMultichannelConnection -IncludeNotSelected" returned nothing?! But I remembered the blog post and tried the following in Windows:
Update-SmbMultichannelConnection
After that, the command returned the active Multichannel connection with RSS enabled. And now, after all this work, what is the benefit... nothing ^^ No joke. I have a working Multichannel connection, RSS is enabled, and nevertheless transferring a file uses only one core. I'll try rebooting the server and the client. Maybe this helps...
... yes, the client needed a reboot. Now it works
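For anyone who wants to keep this setting across reboots, a minimal sketch, assuming Unraid's stock /boot/config/smb-extra.conf path and the IP/speed values from this post:
# append the Multichannel + RSS settings to Unraid's persistent Samba include file
cat >> /boot/config/smb-extra.conf << 'EOF'
server multi channel support = yes
interfaces = "192.168.178.9;capability=RSS,speed=10000000000"
EOF
# reload Samba so the new settings become active
samba restart
On the Windows side, Update-SmbMultichannelConnection (or a client reboot, as described above) may still be needed before Get-SmbMultichannelConnection shows the RSS connection.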
  6. I've been using Unraid for a while now and have collected some experience on how to boost SMB transfer speeds: Donate? 🤗

1.) Choose the right CPU
The most important part is to understand that SMB is single-threaded. This means SMB uses only one CPU core to transfer a file. This is valid for the server and the client. Usually this is not a problem, as SMB does not fully utilize a CPU core (except on really low-powered CPUs). But because of its ability to split shares across multiple disks, Unraid adds an additional process called SHFS, and its load rises proportionally to the transfer speed, which can overload your CPU core. So the most important part is to choose the right CPU.
At the moment I'm using an i3-8100, which has 4 cores and 2257 single-thread Passmark points:
And since I have this single-thread power I'm able to use the full bandwidth of my 10G network adapter, which was not possible with my previous Intel Atom C3758 (857 points), although both have comparable total performance. With the Atom I was not even able to reach 1G speeds while a parallel Windows backup was running (see the next section to bypass this limitation). Now I'm able to transfer thousands of small files and in parallel transfer a huge file at 250 MB/s.
With this experience I suggest a CPU that has around 1400 single-thread Passmark points to fully utilize a 1G ethernet port. As an example: the smallest CPU I would suggest for Unraid is an Intel Pentium Silver J5040.
P.S. Passmark has a list sorted by single-thread performance for desktop CPUs and server CPUs.

2.) Bypass the single-thread limitation
The single-thread limitation of SMB and SHFS can be bypassed by opening multiple connections to your server. This means connecting to "different" servers. The easiest way to accomplish that is to use the IP address of your server as a "second" server while using the same user login:
\\tower\sharename -> best option for user access through the file explorer as it is automatically displayed
\\10.0.0.2\sharename -> best option for backup software, you could map it as a network drive
If you need more connections, you can add multiple entries to your Windows hosts file (Win+R and execute "notepad c:\windows\system32\drivers\etc\hosts"):
10.0.0.2 tower2
10.0.0.2 tower3
Results
If you now download a file from your Unraid server through \\10.0.0.2 while a backup is running on \\tower, it will reach the maximum speed, while a download from \\tower is massively throttled:

3.) Bypass Unraid's SHFS process
If you enable access directly to the cache disk and upload a file to //tower/cache, this will bypass the SHFS process.
Beware: Do not move/copy files between the cache disk and shares as this could cause data loss! The eligible user account will be able to see all cached files, even those from other users.
Temporary solution or "for admins only"
As admin, or for a short test, you could enable "disk shares" under Settings -> Global Share Settings: By that, all users can access all array and cache disks as SMB shares. As you don't want that, your first step is to click on each disk in the WebGUI > Shares and forbid user access, except for the cache disk, which gets read/write access only for your "admin" account.
Beware: Do not create folders in the root of the cache disk as this will create new SMB shares.
Safer permanent solution
Use this explanation.
Results
In this thread you can see the huge difference between copying to a cached share and copying directly to the cache disk (see the sketch below for measuring this locally).
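To get a rough feeling for the SHFS overhead without involving SMB or the network at all, here is a minimal sketch. It assumes you already created a 10GB.bin test file on a cached share (as in the checklist above) and uses Unraid's usual /mnt/user and /mnt/cache paths:
# drop the Linux page cache so both reads really hit the SSD
sync; echo 3 > /proc/sys/vm/drop_caches
# read the file through the user share (goes through the SHFS/FUSE layer)
dd if=/mnt/user/sharename/10GB.bin of=/dev/null bs=1M status=progress
# drop the cache again and read the same file directly from the cache disk (bypasses SHFS)
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/sharename/10GB.bin of=/dev/null bs=1M status=progress
The difference between the two dd results is roughly the overhead that the SHFS layer adds on top of the raw disk speed.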
4.) Enable SMB Multichannel + RSS
SMB Multichannel is a feature of SMB3 that allows splitting file transfers across multiple NICs (Multichannel) and creating multiple TCP connections depending on the number of CPU cores (RSS) since Windows 8. This will raise your throughput depending on your number of NICs, NIC bandwidth, CPU and the used settings:
This feature is experimental
SMB Multichannel is considered experimental since its release with Samba 4.4. The main bug for this state is resolved in Samba 4.13. The Samba developers plan to resolve all bugs with 4.14. Unraid 6.8.3 contains Samba 4.11. This means you use Multichannel at your own risk!
Multichannel for multiple NICs
Let's say your mainboard has four 1G NICs and your client has a 2.5G NIC. Without Multichannel the transfer speed is limited to 1G (117.5 MByte/s). But if you enable Multichannel, it will split the file transfer across the four 1G NICs, boosting your transfer speed to 2.5G (294 MByte/s):
Additionally, it uses multiple CPU cores, which is useful to avoid overloading smaller CPUs.
To enable Multichannel you need to open the Unraid Webterminal and enter the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
And add the following to it:
server multi channel support = yes
Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Then restart the Samba service with this command:
samba restart
You may need to reboot your Windows client, but then it is enabled and should work.
Multichannel + RSS for single and multiple NICs
But what happens if your server has only one NIC? Then Multichannel has nothing to split, but it has a sub-feature called RSS which is able to split file transfers across multiple TCP connections over a single NIC:
Of course this feature works with multiple NICs, too:
But it requires RSS capability on both sides. You need to check your server's NIC by opening the Unraid Webterminal and entering this command (this could become obsolete with Samba 4.13, as it has built-in RSS autodetection):
egrep 'CPU|eth*' /proc/interrupts
It must return multiple lines (one per CPU core) like this:
egrep 'CPU|eth0' /proc/interrupts
      CPU0       CPU1       CPU2       CPU3
129:  29144060   0          0          0          IR-PCI-MSI 524288-edge  eth0
131:  0          25511547   0          0          IR-PCI-MSI 524289-edge  eth0
132:  0          0          40776464   0          IR-PCI-MSI 524290-edge  eth0
134:  0          0          0          17121614   IR-PCI-MSI 524291-edge  eth0
Now you can check your Windows 8 / Windows 10 client by opening PowerShell as Admin and entering this command:
Get-SmbClientNetworkInterface
It must return "True" for "RSS Capable":
Interface Index RSS Capable RDMA Capable Speed   IpAddresses Friendly Name
--------------- ----------- ------------ -----   ----------- -------------
11              True        False        10 Gbps {10.0.0.10} Ethernet 3
Now, after you are sure that RSS is supported on your server, you can enable Multichannel + RSS by opening the Unraid Webterminal and entering the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
Add the following and change 10.10.10.10 to your Unraid server's IP and the speed to "10000000000" for a 10G adapter or "1000000000" for a 1G adapter:
server multi channel support = yes
interfaces = "10.10.10.10;capability=RSS,speed=10000000000"
If you are using multiple NICs the syntax looks like this (add the RSS capability only for supporting NICs!):
interfaces = "10.10.10.10;capability=RSS,speed=10000000000" "10.10.10.11;capability=RSS,speed=10000000000"
Press "Ctrl+X", confirm with "Y" and "Enter" to save the file.
Now restart the SMB service:
samba restart
Does it work?
After rebooting your Windows client (this seems to be a must), download a file from your server (so a connection is established). Now you can check whether Multichannel + RSS works by opening Windows PowerShell as Admin and entering this command:
Get-SmbMultichannelConnection -IncludeNotSelected
It must return a line similar to this (a returned line = Multichannel works), and if you want to benefit from RSS, "Client RSS Capable" must be "True":
Server Name Selected Client IP    Server IP   Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- ---------    ---------   ---------------------- ---------------------- ------------------ -------------------
tower       True     10.10.10.100 10.10.10.10 11                     13                     True               False
In Linux you can verify RSS through this command, which returns one open TCP connection per CPU core (in this case we see 4 connections, as my client has only 4 CPU cores, although my server has 6):
netstat -tnp | grep smb
tcp 0 0 192.168.178.8:445 192.168.178.88:55975 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55977 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55976 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55974 ESTABLISHED 3195/smbd
Note: Sadly, Samba does not create multiple smbd processes, which means we still need a CPU with high single-thread performance to benefit from RSS. This is even mentioned in the presentation: If you are interested in test results, look here.
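A quick way to count those RSS connections per client is this minimal one-liner (assuming the default SMB port 445 and IPv4 clients):
# count established SMB connections per client IP;
# with RSS you should see roughly one connection per client CPU core
netstat -tn | grep ':445 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c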
5.) smb.conf settings tuning
I did massive testing with a huge number of smb.conf settings provided by the following websites and really NOTHING resulted in a noticeable speed gain:
https://wiki.samba.org/index.php/Performance_Tuning
https://wiki.samba.org/index.php/Linux_Performance
https://wiki.samba.org/index.php/Server-Side_Copy
https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/speed.html
https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
https://lists.samba.org/archive/samba-technical/attachments/20140519/642160aa/attachment.pdf
https://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
https://www.samba.org/samba/docs/current/man-html/ (search for "vfs")
https://lists.samba.org/archive/samba/2016-September/202697.html
https://codeinsecurity.wordpress.com/2020/05/18/setting-up-smb-multi-channel-between-freenas-or-any-bsd-linux-and-windows-for-20gbps-transfers/
https://www.snia.org/sites/default/files/SDC/2019/presentations/SMB/Metzmacher_Stefan_Samba_Async_VFS_Future.pdf
https://www.heise.de/newsticker/meldung/Samba-4-12-beschleunigt-Verschluesselung-und-Datentransfer-4677717.html
I would say the recent Samba versions are already optimized by default.

6.) Choose a proper SSD for your cache
You could use Unraid without an SSD, but if you want fast SMB transfers an SSD is absolutely required. Otherwise you are limited to slow parity writes and/or your slow HDD. But many SSDs on the market are not suitable for use as an Unraid SSD cache.
DRAM
Many cheap models do not have a DRAM cache. This small buffer is used to collect very small files or random writes before they are finally written to the SSD, and/or to provide a high-speed area for the file mapping table. In short: you need a DRAM cache in your SSD. No exception.
SLC Cache
While DRAM is only absent in cheap SSDs, an SLC cache can be missing in different price ranges. Some cheap models use a small SLC cache to "fake" their technical data. Some mid-range models use a big SLC cache to raise durability and speed when installed in a client PC. And some high-end models do not have an SLC cache at all, as their flash cells are fast enough without it. In the end you are not interested in the SLC cache. You are only interested in continuous write speeds (see "Verify the continuous writing speed of the SSD").
Determine the required writing speed
Before you are able to select the right SSD model, you need to determine your minimum required transfer speed. This should be simple: how many ethernet ports do you want to use, or do you plan to install a faster network adapter? Let's say you have two 5G ports. With SMB Multichannel it's possible to use them combined, and as you plan to install a 10G card in your client you could use 10G in total. Now we can calculate: 10 x 117.5 MByte/s (real throughput per 1G ethernet) = 1175 MByte/s, and by that we have two options:
buy one M.2 NVMe (assuming your motherboard has such a slot) with a minimum writing speed of 1175 MByte/s
buy two or more SATA SSDs and use them in a RAID0, each with a minimum writing speed of 550 MByte/s
Verify the continuous writing speed of the SSD
As an existing SLC cache hides the real transfer speed, you need to invest some time to check whether your desired SSD model has an SLC cache and how much the SSD throttles after it is full. A solution could be to search for "review slc cache" in combination with the model name. Using the image search can be helpful as well (maybe you see a graph with a falling line). If you do not find anything, use YouTube. Many people out there test their new SSD by simply copying a huge amount of files onto it. If the SSD is already installed in your server, see the sketch at the end of this section.
Note: CrystalDiskMark, AS SSD, etc. benchmarks are useless as they only test a really small amount of data (which fits into the fast cache).
Durability
You could look at the "TBW" value of the SSD, but in the end you won't be able to kill the SSD within the warranty period, as long as the very first filling of your Unraid server is done without the SSD cache. As an example, a 1TB Samsung 970 EVO has a TBW of 600, and if your server has a total size of 100TB you would waste 100TBW on your first fill for nothing. If you plan to use Plex, think about using the RAM as your transcoding storage, which saves a huge amount of writes to your SSD. Conclusion: optimize your writes instead of buying an expensive SSD.
NAS SSD
Do not buy "special" NAS SSDs. They do not offer any benefit compared to high-end consumer models, but cost more.
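If the SSD is already installed in your server, here is a minimal sketch to check its continuous write speed yourself. It assumes a cached share named "sharename" with enough free space; note that /dev/zero is fast but some controllers compress zeros, while /dev/urandom avoids that at the cost of being CPU-limited:
# write ~50GiB directly to the SSD, bypassing the Linux write cache, and watch when the speed drops
# (the drop marks the point where the SLC cache is full)
dd if=/dev/zero of=/mnt/cache/sharename/slc-test.bin bs=1GiB count=50 oflag=direct status=progress
# clean up the test file afterwards
rm /mnt/cache/sharename/slc-test.bin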
7.) More RAM
More RAM means more caching, and as RAM is even faster than the fastest SSDs, this adds an additional boost to your SMB transfers. I recommend installing two identical RAM modules (or more, depending on the number of slots) to benefit from "Dual Channel" speeds. RAM frequency is not as important as RAM size.
Read cache for downloads
If you download a file twice, the second download does not read the file from your disk; instead it is served from your RAM only. The same happens if you're loading the covers of your MP3s or movies, or if Windows is generating thumbnails of your photo collection. More RAM means more files in your cache. The read cache uses by default 100% of your free RAM.
Write cache for uploads
Linux uses by default 20% of your free RAM to cache writes before they are written to the disk. You can use the Tips and Tweaks plugin to change this value, or add this to your Go file (with the Config Editor plugin):
sysctl vm.dirty_ratio=20
But before changing this value, you need to be sure you understand the consequences:
Never use your NAS without a UPS if you use write caching, as this could cause huge data loss!
The bigger the write cache, the smaller the read cache (so using 100% of your RAM as write cache is not a good idea!)
If you upload files to your server, they are written to your disk 30 seconds later (vm.dirty_expire_centisecs).
Without SSD cache: if your upload size is generally higher than your write cache size, it starts to clean up the cache and in parallel write the transfer to your HDD(s), which could result in slow SMB transfers. Either you raise your cache size so it never fills up, or you consider disabling the write cache entirely.
With SSD cache: SSDs love parallel transfers (read #6 of this guide), so a huge write cache or even a full cache is not a problem.
But which dirty_ratio value should you set? This is something you need to determine yourself, as it's completely individual:
At first you need to think about the highest possible RAM usage, like active VMs, RAM disks, Docker containers, etc. By that you get the smallest amount of free RAM of your server: Total RAM size - Reserved RAM through VMs - Used RAM through Docker containers - RAM disks = Free RAM
Now the harder part: determine how much RAM is needed for your read cache. Do not forget that VMs, Docker containers, processes etc. load files from disks and they are all cached as well. I thought about this and came up with this command that counts hot files:
find /mnt/cache -type f -amin -1440 ! -size +1G -exec du -bc {} + | grep total$ | cut -f1 | awk '{ total += $1 }; END { print total }' | numfmt --to=iec-i --suffix=B
It counts the size of all files on your SSD cache that were accessed in the last 24 hours (1440 minutes; note that find's -amin option takes minutes).
The maximum file size is 1GiB, to exclude VM images, Docker containers, etc.
This works only if you (hopefully) use your cache for your hot shares like appdata, system, etc.
Of course you could repeat this command on several days to check how it fluctuates.
This command must be executed after the mover has finished its work.
This command isn't perfect, as it does not count hot files inside a VM image.
Now we can calculate: 100 / Total RAM x (Free RAM - Command Result) = vm.dirty_ratio
If your calculated vm.dirty_ratio is lower than 5% (or even negative), you should set it to 5 and buy more RAM.
Between 5% and 20%: set it accordingly, but you should consider buying more RAM.
Between 20% and 90%: set it accordingly.
If your calculated vm.dirty_ratio is higher than 90%, you are probably not using your SSD cache for hot shares (as you should), or your RAM is huge as hell (congratulations ^^). I suggest not setting a value higher than 90.
Of course you need to recalculate this value if you add more VMs or Docker containers. A small sketch for checking and persisting these values follows at the end of this post.

8.) Disable haveged
Unraid does not trust the randomness of Linux and uses haveged instead. By that, all encryption processes on the server use haveged, which produces extra load. If you don't need it, disable it through your Go file (CA Config Editor) as follows:
# -------------------------------------------------
# disable haveged as we trust /dev/random
# https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452
# -------------------------------------------------
/etc/rc.d/rc.haveged stop
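As referenced in section 7, here is a minimal sketch for checking the current write-cache tunables and persisting a calculated dirty_ratio. The value 50 is only an example result of the calculation above, and /boot/config/go is Unraid's standard Go file:
# show the current write-cache settings (vm.dirty_expire_centisecs is the ~30 second flush delay mentioned above)
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs
# apply a calculated value to the running system (lost after a reboot)
sysctl vm.dirty_ratio=50
# persist it across reboots by adding the same command to the Go file
echo "sysctl vm.dirty_ratio=50" >> /boot/config/go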
  7. I have the same question. I'm not able to get RSS enabled between my Unraid server and my Windows 10 client, which is needed to use Multichannel = multicore = maximum 10G performance: I tested many different settings as you did (interfaces, speed, etc.), but nothing helps. I used this documentation to check whether the 10G network adapter of my Unraid server is properly set up, and it looks fine:
egrep 'CPU|eth0' /proc/interrupts
      CPU0       CPU1       CPU2       CPU3
129:  29144060   0          0          0          IR-PCI-MSI 524288-edge  eth0
131:  0          25511547   0          0          IR-PCI-MSI 524289-edge  eth0
132:  0          0          40776464   0          IR-PCI-MSI 524290-edge  eth0
134:  0          0          0          17121614   IR-PCI-MSI 524291-edge  eth0
ethtool -x eth0
RX flow hash indirection table for eth0 with 4 RX ring(s):
 0:   0 1 2 3 0 1 2 3
 8:   0 1 2 3 0 1 2 3
16:   0 1 2 3 0 1 2 3
24:   0 1 2 3 0 1 2 3
32:   0 1 2 3 0 1 2 3
40:   0 1 2 3 0 1 2 3
48:   0 1 2 3 0 1 2 3
56:   0 1 2 3 0 1 2 3
RSS hash key:
1e:ad:71:87:65:fc:26:7d:0d:45:67:74:cd:06:1a:18:b6:c1:f0:c7:bb:18:be:f8:19:13:4b:a9:d0:3e:fe:70:25:03:ab:50:6a:8b:82:0c
RSS hash function:
    toeplitz: on
    xor: off
    crc32: off
Finally I tried to change the hash algorithm of the network adapter to XOR as well, but it fails because of this bug. What really bothers me is that it's not possible to check through Windows 10 why the connection does not use RSS. More settings we could test: https://lists.samba.org/archive/samba/2016-September/202697.html
  8. DevSleep is a feature used in enterprise storage to save even more energy (5mW instead of 0.5 - 2W, depending on the HDD model) when the disk spins down. Does anyone know of an adapter that creates a 3.3V signal after the disk goes to sleep? Something like "if the 12V rail draws less than 2W, then send the 3.3V signal, else nothing". To me it sounds like a simple relay. But spinning the disk up again would be a problem. ^^
  9. I wanted the opposite (spin-down) and had the same question (which ID belongs to parity2). This is the command that returns all IDs for all disks: mdcmd status
  10. @ptr727 (and all other users) I have a theory regarding this problem. I guess it's the same thing that causes different transfer speeds for different Unraid users with 10G connections. Unraid uses a "layer" for all user shares with the process "shfs". This process is single-threaded and by that limited by the performance of a single CPU core/thread. You are using the E3-1270 v3 and it reaches 7131 Passmark points. As your CPU uses Hyper-Threading, I'm not sure whether the shfs process is able to use the maximum of a single core or splits the load. If it is spread across 8 different threads, one has only ≈891 Passmark points. For a few days now I have been using the i3-8100, and with its 6152 Passmark points it's weaker than yours, but as it has only 4 cores/threads its per-core performance is guaranteed to be ≈1538 points. Since I have this CPU I'm able to max out 10G and have much better SMB performance. It would be nice if we could test my theory. At first you need to create many random files on your server's cache (replace the two "Music" with one of your share names):
# create random files on the unraid server
mkdir /mnt/cache/Music/randomfiles
for n in {1..10000}; do
dd status=none if=/dev/urandom of=/mnt/cache/Music/randomfiles/$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
done
Then you create a single 20GB file on the cache (again, replace "Music"):
dd if=/dev/urandom iflag=fullblock of=/mnt/cache/Music/20GB.bin bs=1GiB count=20
Now you use your Windows client to download the file "20GB.bin" through the Windows Explorer. While the download is running, you open the command line (cmd) in Windows and execute the following (replace "tower" with your server's name and "Music" with your share name):
start robocopy \\tower\Music\randomfiles\ C:\randomfiles\ /E /ZB /R:10 /W:5 /TBD /IS /IT /NP /MT:128
This happened as long as I had the Atom C3758 (≈584 core Passmark points):
And this with my new i3-8100:
Finally you could retry this test after disabling Hyper-Threading in the BIOS. By that we could be sure how relevant it is or not.
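A small cleanup sketch for afterwards, assuming the "Music" share name used in the test above, since the test leaves about 20GB plus 10,000 small files on the cache:
# remove the generated test data from the cache share
rm -r /mnt/cache/Music/randomfiles
rm /mnt/cache/Music/20GB.bin
# on the Windows client, delete C:\randomfiles as well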
  11. I have problems with my SMB Multichannel setup and found out that Windows supports the Toeplitz hash function, which needs a key length of 40 bytes, and some websites claim that XOR is supported as well. So I checked the RSS configuration of my network adapter:
ethtool -x eth0
RX flow hash indirection table for eth0 with 4 RX ring(s):
 0:   0 1 2 3 0 1 2 3
 8:   0 1 2 3 0 1 2 3
16:   0 1 2 3 0 1 2 3
24:   0 1 2 3 0 1 2 3
32:   0 1 2 3 0 1 2 3
40:   0 1 2 3 0 1 2 3
48:   0 1 2 3 0 1 2 3
56:   0 1 2 3 0 1 2 3
RSS hash key:
1e:ad:71:87:65:fc:26:7d:0d:45:67:74:cd:06:1a:18:b6:c1:f0:c7:bb:18:be:f8:19:13:4b:a9:d0:3e:fe:70:25:03:ab:50:6a:8b:82:0c
RSS hash function:
    toeplitz: on
    xor: off
    crc32: off
As you can see, "toeplitz" is active and the key has a length of 40 bytes (hexadecimal). Should be correct. But I wanted to test whether my problems would be solved by enabling XOR, and it fails:
# ethtool -X eth0 hfunc xor
Cannot set RX flow hash configuration: Operation not supported
Then I tried to overwrite the key (with the same value), but this fails, too:
# ethtool -X eth0 hkey 1e:ad:71:87:65:fc:26:7d:0d:45:67:74:cd:06:1a:18:b6:c1:f0:c7:bb:18:be:f8:19:13:4b:a9:d0:3e:fe:70:25:03:ab:50:6a:8b:82:0c
Cannot set RX flow hash configuration: Operation not supported
Finally I tried to allow XOR on an onboard NIC:
ethtool -X eth1 hfunc xor
Killed
After that, http://tower/Settings/NetworkSettings isn't loaded anymore and the CPU is at maximum load. Then I tried to reboot the server, but it seems to be frozen: I'm still able to access it through SMB (?), while the WebGUI is dead, external SSH (PuTTY) is dead, and the terminal hangs at "Starting diagnostics collection...". After pressing the power button the shutdown process repeated, but it hung again at "Starting diagnostics collection...", so I had to hold the power button to force a shutdown. I'm not sure whether this is a bug related to Unraid, Slackware or the network adapter, but it seems to be a bug, as all search engine results are related to bugs.
  12. - Deleting a folder with thousands of files (Windows Explorer needs ages)
- Moving files between two folders that are in different shares (Windows Explorer would download and re-upload them)
- Creating a ZIP (takes ages)
- Creating, deleting, moving, etc. in a folder that has no user access
- etc.
  13. I'm using the official image and I'd like to know why Plex is constantly writing to the docker.img (it's the only active container, so it must be Plex): Logging is already disabled. I used the console of the container and this command to find the changed files:
find / -mmin -10 -ls > /config/recent_modified_files5.txt
And it contains... sorry... I need to check later. Someone started a movie while writing this post ^^
  14. Since SMB 3.0 this is not valid anymore. But you need to enable "server multi channel support = yes" in the smb.conf. This is something I will test regarding my other problem with multiple SMB connections/sessions. But in the end this is not as important as the shfs process, as the smb process is not as CPU hungry. I used 10G for a long time with my Synology NAS (Intel Atom C3538, ≈399 core Passmark points, no SSD cache) and never enabled multichannel.
  15. I would say yes. It seems "shfs" (the single-core bottleneck) is a program written or adapted by the Unraid developers. They need to find a way to let this program split its work across multiple cores. What do you think @limetech @JorgeB @SpencerJ?
  16. Oh no... I found out that my C246 motherboard does not support SATA port multipliers 😭 https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/300-series-chipset-pch-datasheet-vol-1.pdf But the performance of the USB-connected drives looks good, so there is no reason not to use USB (except for the slightly higher energy consumption of about 2W per drive):
  17. If your individual files are larger than 2TB, then yes. Not if they consist of many small files. After a file has landed on the cache and the mover starts, it gets moved to the HDD array, so the space on the SSD is free again. The rest is a simple calculation: how fast can you fill the SSD, and how often does the mover run to empty it again in parallel? I use the cache both to speed up transfers and to keep certain data permanently on the cache, e.g. my MP3s, Docker containers and VMs. This way that data is available very quickly and the HDD array can sleep even longer.
  18. You can't go wrong with that one. Here, from 4:50 on, you can see the continuous write rate of 480 MB/s. With 10 Gbit/s you can actually reach that, depending on the CPU performance of your NAS. At 115 MB/s (1G LAN) it is enough, by the way, to extend the mover interval to 4 hours, because 2TB / 115 MB/s = 4.8 hours of copying time until the SSD is full. You can also pick an even longer interval if you don't mind the speed dropping to HDD array speed after a very large amount of data, because Unraid then automatically writes to the HDDs once the SSD is full. You should just configure 50-100GB to be kept free on the cache in the settings. I have set 100GB because I rip movies quite often, and if large files are written in parallel the cache can otherwise fill up completely and cause an error. Since I set 100GB, that no longer happens.
  19. Then there should be no problems either. The disk is fast enough for a sequential transfer, even if it involves many small files (although with that many small files a single large ZIP is of course still better). So it must be one of the other points mentioned above, or something I don't know about yet. SSDs that are supposedly made specifically for NAS use are simply more expensive. Do you want to install 10G at some point? Then you might have to plan for an NVMe (or a RAID1 of two SATA SSDs). If 1G LAN is enough for you, you can pick pretty much any SSD with "3D NAND", e.g. WD Blue, SanDisk Ultra, Samsung Evo/Pro, Crucial MX500, etc. Preferably one with 1TB or more, as larger SSDs are known to have a larger SLC cache. You should avoid the Samsung QVO, Crucial BX/MX100, SanDisk Plus, etc.
  20. Could you add intel-gpu-tools? With intel_gpu_top it's possible to check the iGPU load.
  21. With this guide Plex uses your RAM while transcoding, which prevents wearing out your SSD. Edit the Plex container and enable the "Advanced View". Add this to "Extra Parameters" and hit "Apply":
--mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000
Result:
Side note: If you dislike permanent writes to your SSD, add "--no-healthcheck", too.
Now open Plex -> Settings -> Transcoder and change the path to "/tmp":
If you'd like to verify it's working, you can open the Plex container's console. Now enter this command while a transcode is running:
df -h
Transcoding to the RAM disk works if "Use%" of /tmp is not "0%":
Filesystem Size Used Avail Use% Mounted on
tmpfs      3.8G 193M 3.7G  5%   /tmp
After some time it fills up to nearly 100%:
tmpfs      3.8G 3.7G 164M  97%  /tmp
And then Plex purges the folder automatically:
tmpfs      3.8G 1.3G 3.5G  33%  /tmp
If you stop the movie, Plex deletes everything:
tmpfs      3.8G 3.8G 0     0%   /tmp
By this method Plex never uses more than 4GB of RAM, which is important, as fully utilizing your RAM can cause unexpected server behaviour.
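Instead of running df -h repeatedly by hand, a minimal sketch to watch the tmpfs filling up, run inside the Plex container's console (it only assumes that /tmp is the tmpfs mount configured above):
# print the /tmp usage line every 5 seconds while a transcode is running
while true; do df -h /tmp | tail -n 1; sleep 5; done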
  22. Yesterday I was lazy and simply set "/dev/shm/plextranscode" as my transcoding path. I checked the path and it was created:
ls /dev/shm
plextranscode/
But it stays empty while transcoding! I saw in the Main overview of the Unraid WebGUI that it still produces writes on the NVMe where the docker image is located. With this useful command I was able to verify it:
inotifywait -mr /mnt/cache
While this returned nothing:
inotifywait -mr /dev/shm
So the writes are not leaving the docker image. But why? The container's config looks correct: I opened the Plex container's console and tried to find the path, and yes, it's writing to a completely different path:
# ls -l /tmp/Transcode/Sessions
total 0
drwxr-xr-x 1 plex users 62522 Sep 19 12:49 plex-transcode-b7dnev7r0gdgfjq8267pwoxu-136bae98-3ca4-4cbc-ad26-3656b6830885
# ls -l /tmp/Transcode/Sessions
total 0
drwxr-xr-x 1 plex users 62890 Sep 19 12:50 plex-transcode-b7dnev7r0gdgfjq8267pwoxu-136bae98-3ca4-4cbc-ad26-3656b6830885
#
I verified that the container is able to write a file into /transcode:
# echo "test" > /transcode/test.txt
# cat /transcode/test.txt
test
# ls -l /transcode/test.txt
-rw-r--r-- 1 root root 5 Sep 19 12:51 /transcode/test.txt
Re-checked that it exists in /dev/shm:
ls /dev/shm/plextranscode
test.txt
Strange. Re-checked the Plex transcoding path and it's correct: Double-checked the Preferences.xml content: Why is Plex writing to a different folder... hmmm. Maybe a chmod thing? Let's check the permissions:
ls -l
...
drwxrwxrwt 1 root root 132 Sep 19 04:14 tmp
drwxr-xr-x 2 root root 60 Sep 19 12:51 transcode
I chmod them to 777:
chmod -R 777 /dev/shm/plextranscode
root@Thoth:~# ls -l /dev/shm
total 0
drwxrwxrwx 2 root root 60 Sep 19 12:51 plextranscode/
Checked that it's active inside the docker container, and it looks good:
ls -l
drwxrwxrwt 1 root root 132 Sep 19 04:14 tmp
drwxrwxrwx 2 root root 60 Sep 19 12:51 transcode
...
# ls -l /transcode
total 4
-rwxrwxrwx 1 root root 5 Sep 19 12:51 test.txt
So I restarted the Plex container and started a different movie...
aahh, looks nice:
inotifywait -mr /dev/shm
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS init-stream1.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS chunk-stream1-00080.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE init-stream1.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE chunk-stream1-00080.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ OPEN init-stream0.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ OPEN chunk-stream0-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS init-stream0.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS chunk-stream0-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE init-stream0.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE chunk-stream0-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ OPEN init-stream1.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ OPEN chunk-stream1-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS init-stream1.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ ACCESS chunk-stream1-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE init-stream1.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CLOSE_NOWRITE,CLOSE chunk-stream1-00081.m4s
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ CREATE chunk-stream1-00143.m4s.tmp
/dev/shm/plextranscode/Transcode/Sessions/plex-transcode-08xmsae9kwytrf1u91lnijt0-d3df683f-e543-49bc-8f22-7ff29640ead4/ OPEN chunk-stream1-00143.m4s.tmp
Conclusion: If you use any path other than /tmp itself (which is already chmod 777), for example a subfolder inside /tmp or a folder below /dev/shm as in this post, you need to set chmod 777 on it, otherwise it won't work!
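Since /dev/shm is a tmpfs that is emptied on every reboot, here is a minimal sketch, assuming the /dev/shm/plextranscode path from this post and Unraid's standard /boot/config/go file, to recreate the folder with the right permissions automatically:
# add these lines to /boot/config/go so the transcode folder exists with chmod 777 after every boot
mkdir -p /dev/shm/plextranscode
chmod 777 /dev/shm/plextranscode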
  23. That doesn't matter as long as the mover runs at a shorter interval than the rate at which you push new files onto it. That is one of the cheap SSDs I mentioned (sorry ^^). Without DRAM and without an SLC cache, the cells wear out quickly, especially when used as a cache (TLC has roughly 1000 write cycles, then the SSD is dead). For larger amounts of data it is practically unsuitable as a cache, since at 100 MB/s it can't even saturate 1G. In addition, this SSD even has a bug, which can however be fixed with a firmware update. Every SSD has to be cleaned up regularly with the TRIM command. In Windows this happens automatically. In Unraid you need a plugin for it and have to give it an appropriate schedule (e.g. daily). You can find the plugin under "Apps", provided you have installed the Community Applications (recommended for every Unraid user). So directly onto the HDD. That is of course good for the lifespan of the SSD, but it obviously contradicts the point of a cache (avoiding the slowness of an HDD). Which HDD is it exactly? By the way, I also disabled the cache when filling the NAS for the first time, simply to spare the SSD. But since then it has been active as a cache everywhere, so that I can use the highest speed.