Everything posted by rhard

  1. I have an i9-9900 with 32 GB RAM and 10Gb Ethernet. Same issue with small files (Time Machine, Apple Photos library, etc.).
  2. Could this have something to do with the old SSD write issue? Did you already reformat your NVMe drives, if any? https://unraid.net/blog/unraid-6-9-beta29
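The linked blog post is about re-partitioning SSDs on a 1MiB boundary. A minimal sketch of the alignment check (the `start` value and the sysfs path in the comment are illustrative; substitute your own device):

```shell
#!/bin/sh
# Sketch: check whether a partition's start sector is 1MiB-aligned
# (2048 sectors at 512 bytes each = 1MiB), which is what the 6.9
# re-partitioning is meant to guarantee.
start=2048   # e.g. from: cat /sys/block/nvme0n1/nvme0n1p1/start
if [ $((start % 2048)) -eq 0 ]; then
    echo "start sector $start is 1MiB-aligned"
else
    echo "start sector $start is NOT 1MiB-aligned -- consider re-formatting"
fi
```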
  3. Also, "Case-sensitive names: Auto" causes a significant performance hit in directories with many files:
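For reference, these are the standard Samba options behind that share setting; my understanding of how the unRAID GUI maps to smb.conf, so verify against the generated config on your box (the share name and path below are made up):

```
[projects]
    path = /mnt/user/projects
    case sensitive = yes       ; skip the case-insensitive name scan on lookup
    preserve case = yes
    short preserve case = yes
```

With `case sensitive = auto` (the default), Samba may have to scan a whole directory to match a name case-insensitively, which is what hurts in directories with many files.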
  4. Yes, I'm also very unhappy. Last month I started searching for a home server solution, mainly as a backup for my photo/video archive, plus some Docker containers and a couple of VMs. First I wanted to buy a QNAP, but I changed my mind in favor of a fanless design and better hardware, so I started evaluating all the possible software solutions. UNRAID still remains my favorite. I tried OMV and Proxmox; I don't want OpenNAS because of OpenBSD. But the SMB performance is bad. I expected to run some projects over 10GbE from NVMe, but I was very disappointed after waiting 20+ hours for my 450GB Apple Photos library to transfer to UNRAID. Now I want to try a custom Linux build. I don't need any kind of parity or RAID; I will just rsync the data from one 10TB drive to another. I will probably try an LVM cache with NVMe in front of the main HDD, because the UNRAID cache is also not what I expected. But I like the UNRAID UI and the Docker/VM management. Theoretically I can live with the performance issues, but generally speaking I would like to have more support from the devs, or at least some acknowledgement.
  5. Sorry, my mistake. Yes, I meant the -P 8 option on the client.
  6. iperf3 -c -D 8 will start 8 threads to fully saturate the network. With one thread I also only get 6~7 Gb/s; with more threads it gets fully saturated at 1.17 GB/s. I don't know if SMB uses more threads, but at least you can check that your network runs at full speed.
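As the correction above notes, the parallel-stream flag is actually -P (-D daemonizes the server). A loopback sanity check of the multi-stream invocation, guarded so it degrades gracefully when iperf3 is not installed (the port number is arbitrary):

```shell
#!/bin/sh
# Sketch: 8 parallel iperf3 streams against a throwaway loopback server.
# On a real test the client would point at the unRAID box instead of lo.
if command -v iperf3 >/dev/null 2>&1; then
    iperf3 -s -D -p 5299                   # -D: run server as a daemon
    sleep 1
    iperf3 -c 127.0.0.1 -p 5299 -P 8 -t 2  # -P 8: eight streams, 2 seconds
    pkill -f 'iperf3 -s -D -p 5299'        # clean up the throwaway server
else
    echo "iperf3 not installed; on the real network run: iperf3 -c <server> -P 8"
fi
```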
  7. Yes, but he also has a slow transfer of a 10GB file to the SAS drive above.
  8. Did you also enable jumbo frames on your 10GbE? What are your iperf3 -D 8 results?
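A quick way to check whether jumbo frames are actually in effect is to read the MTU per interface. A sketch that only reads sysfs so it runs anywhere; the `eth0` in the comment is a placeholder interface name:

```shell
#!/bin/sh
# Sketch: print the current MTU of every network interface.
for iface in /sys/class/net/*; do
    printf '%s mtu=%s\n' "$(basename "$iface")" "$(cat "$iface/mtu")"
done
# To actually enable jumbo frames (needs root, and every device on the
# path -- switch ports included -- must support MTU 9000):
#   ip link set dev eth0 mtu 9000
```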
  9. Another thing to test: try setting Case-sensitive names: Yes on a share. I did yesterday, and now my share is only 20-30% slower than an unassigned device (besides, it is NVMe vs an old SATA SSD in a USB port). Also, if you have a Samsung NVMe: in some of the threads here, people found it does not do well with BTRFS, so I formatted mine to XFS.
  10. Enabling a disk share is easy, and it is a temporary solution for me now. One problem: you cannot use the standard share UI to create different share folders with different parameters; you can only share the disk as a whole. There are different threads on the forum addressing strange problems with SMB/SHFS. Here are some of them: In the last topic, at the end, I did some tests comparing User Share vs Disk Share. I will now do the tests with an unassigned device as well.
  11. I have a problem with small files too. According to my investigation, this is an unRAID SHFS problem and not SMB. You can try to create a share on an unassigned SSD drive and compare it to a normal share (even a cache-only one). In my case, it is much faster. It would be very interesting to hear about your experience. I like unRAID as a media server, but it is almost unusable as network storage for small-file projects.
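A crude small-file benchmark for that comparison could look like this. Point TARGET at the mounted share you want to test (user share, disk share, unassigned device); it defaults to a /tmp path here only so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: time the creation of many small files, which is where
# SHFS overhead shows up the most.
TARGET=${TARGET:-/tmp/smallfile_bench}
mkdir -p "$TARGET"
start=$(date +%s)
i=0
while [ $i -lt 500 ]; do
    # 4KB files approximate a source-tree / photo-library workload
    dd if=/dev/zero of="$TARGET/f$i" bs=4k count=1 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s)
echo "500 small files in $((end - start))s"
```

Run it once per mount and compare the wall-clock times; per-file latency, not raw bandwidth, is what dominates this workload.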
  12. Here are the results comparing User Share vs Disk Share + Direct IO: Write speeds: I have no idea what this means...
  13. I just started with unRAID. I am on the latest stable, which is 6.8.3. Today I will test SMB on the cache disk, bypassing the unRAID file system.
  14. Did you also toggle "Performance" -> "Power Save" in Tips and Tweaks? Anyway, I just continue to measure performance, and it looks like SMB works better with the ACPI driver. I will stay with disabled pstates as well, at least until kernel 5. Currently, slow SMB write speeds and the CPU power issues are the only major problems stopping me from switching to unRAID.
  15. I have the same problem. I found that the intel_pstate driver is very sensitive to how you sample it. If you run the popular watch command, it simply shows wrong results:
      watch "grep 'cpu MHz' /proc/cpuinfo"
      cpu MHz : 4742.255   cpu MHz : 4480.885   cpu MHz : 4451.161   cpu MHz : 4420.211
      cpu MHz : 4473.812   cpu MHz : 4423.951   cpu MHz : 4432.021   cpu MHz : 4477.743
      cpu MHz : 4558.370   cpu MHz : 4413.524   cpu MHz : 4425.724   cpu MHz : 4484.005
      cpu MHz : 4425.530   cpu MHz : 4403.425   cpu MHz : 4437.033   cpu MHz : 4382.428
      But if you decrease the update interval, the results are much better:
      watch -n 0.3 "grep 'cpu MHz' /proc/cpuinfo"
      cpu MHz : 1686.816   cpu MHz : 1713.529   cpu MHz : 992.136    cpu MHz : 971.635
      cpu MHz : 1724.956   cpu MHz : 4800.423   cpu MHz : 3784.639   cpu MHz : 4902.354
      cpu MHz : 967.030    cpu MHz : 1059.654   cpu MHz : 1674.691   cpu MHz : 1738.267
      cpu MHz : 2040.228   cpu MHz : 1502.640   cpu MHz : 889.176    cpu MHz : 838.754
      I guess the watch command itself immediately wakes up the CPU, so it shows a high frequency.
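A one-shot sample avoids the question of watch's own wake-up effect entirely. A sketch that prefers cpufreq's `scaling_cur_freq` (which asks the governor rather than re-reading /proc/cpuinfo) and falls back to the grep above where cpufreq is not exposed:

```shell
#!/bin/sh
# Sketch: take a single CPU-frequency sample per core.
if [ -r /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq ]; then
    for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq; do
        # scaling_cur_freq reports kHz; convert to MHz for readability
        awk '{printf "%.0f MHz\n", $1 / 1000}' "$f"
    done
else
    grep 'cpu MHz' /proc/cpuinfo
fi
```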
  16. Hi guys, unfortunately I see the same issue with SMB. Here are my results comparing the Performance and On Demand CPU profiles. P-states are disabled in the config, so my CPU runs at max frequency even at idle. Here are the write speeds once again: Currently, this prevents me from switching to unRAID in production. Shouldn't this be a high-priority issue? My specs: 10GbE, i9-9900, NVMe cache, intel_pstate=disable