Tucubanito07 Posted January 1, 2020

Good afternoon guys, and Happy New Year. I have done some research and tried messing around with the MTU, and I still can't get my 10Gb link to stay at even 400MB/s. My main PC is connected directly to the server, with a 10Gb card on both sides. I have an SSD on my computer where I transcode movies; after that's done, I send them to the Unraid server. The Media share has "Use Cache" set to Yes. My two SSDs are in RAID 1 under Windows. The link eth3 is on 192.168.3.5 and my transcode PC is on 192.168.3.6; they are connected directly to each other. Can someone please help me out with this issue? A transfer will start at 500MB/s, then die down to 30MB/s and fluctuate the whole time; it will also go up to 150MB/s and then back down to 40 or 30MB/s. I also have a share called SSD Cache that uses only the cache, and the same thing happens. Thanks in advance.
NewDisplayName Posted January 1, 2020 (edited)

I have had the same problem. I fixed it by adding another drive (M.2); they are very cheap and very fast. I've added it via the UD plugin, works like a charm! My guess is that the speed drops because of the cache/array/whatever, the typical Unraid problem... Unraid isn't built for "max speed". By mounting an extra SSD, you get 100% full speed.

Edited January 1, 2020 by nuhll
Gasaraki Posted January 2, 2020 (edited)

What type of SSDs are those? A lot of TLC or QLC SSDs have an SLC cache that enables 400+MB/s transfers, but once you fill up that cache with a big file, speed drops to around 40MB/s. Also realize that I am not talking about the Unraid cache; I'm talking about the built-in cache inside each SSD.

Edited January 2, 2020 by Gasaraki
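The fill-then-fall behavior described above can be modeled roughly: while sustained writes arrive faster than the drive can fold data into native TLC/QLC flash, the SLC cache fills at the rate of the difference. A minimal sketch (all figures are hypothetical illustrations, not measured specs of any particular drive):

```python
# Rough model of an SLC write cache on a TLC/QLC SSD.
# All numbers below are hypothetical, for illustration only.

def seconds_until_cache_full(cache_gb, ingress_mbs, native_mbs):
    """Time until the SLC cache is exhausted while writing at ingress_mbs.

    The cache fills at the surplus between incoming writes and the
    drive's native fold-back rate. Returns None if it never fills.
    """
    surplus = ingress_mbs - native_mbs
    if surplus <= 0:
        return None  # drive keeps up; cache never fills
    return cache_gb * 1024 / surplus

# e.g. a 12 GB SLC cache, 400 MB/s incoming, 60 MB/s native TLC write:
t = seconds_until_cache_full(12, 400, 60)
print(f"cache exhausted after ~{t:.0f} s")  # ~36 s, then speed drops to native
```

This matches the symptom in the first post: a fast burst for the first few tens of seconds of a large transfer, then a collapse to the drive's native write speed.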
Tucubanito07 (Author) Posted January 2, 2020

49 minutes ago, Gasaraki said: What type of SSDs are those? A lot of TLC or QLC SSDs have an SLC cache that enables 400+MB/s transfers, but once you fill up that cache with a big file, speed drops to around 40MB/s. Also realize that I am not talking about the Unraid cache; I'm talking about the built-in cache inside each SSD.

I have a SanDisk SSD Plus 1TB internal. I believe it's SLC.
Michael_P Posted January 2, 2020

Those drives are known for terrible write performance.
Tucubanito07 (Author) Posted January 3, 2020

4 hours ago, Michael_P said: Those drives are known for terrible write performance.

So technically the way I have it set up is good; it's just that the drive is not good at all for caching.
JorgeB Posted January 3, 2020

6 hours ago, Tucubanito07 said: So technically the way I have it set up is good; it's just that the drive is not good at all for caching.

Most likely. If the initial speed is 500MB/s (while it's caching to RAM), the network is performing well enough, though not great, since initial speeds closer to 1GB/s would be better. But the main problem appears to be that the cache device can't keep up: very few SATA SSDs can sustain 300-400MB/s. The Samsung 850/860 EVO and Crucial MX500 can usually do around 300MB/s, for the larger models, 500GB and above.
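These numbers can be sanity-checked with simple arithmetic: a 10Gb link tops out around 1.25GB/s raw, so a SATA SSD sustaining ~300MB/s is the bottleneck long before the network is. A quick sketch (the ~6% protocol overhead is a rough assumption for Ethernet/IP/TCP framing, not a measured value):

```python
# Sanity-check the thread's numbers: what a 10Gb link can carry
# versus what a typical SATA SSD can sustain.

LINK_GBPS = 10
RAW_MBS = LINK_GBPS * 1000 / 8       # 1250 MB/s raw line rate
OVERHEAD = 0.06                      # assumed framing/protocol overhead (~6%)
PRACTICAL_MBS = RAW_MBS * (1 - OVERHEAD)

SATA_SUSTAINED_MBS = 300             # thread's estimate for a good SATA SSD

print(f"10GbE raw line rate:  {RAW_MBS:.0f} MB/s")
print(f"10GbE practical:     ~{PRACTICAL_MBS:.0f} MB/s")
print(f"bottleneck: SATA SSD sustained write at ~{SATA_SUSTAINED_MBS} MB/s")
```

So even the initial 500MB/s burst uses only about half the link; the sustained 30-40MB/s is far below what either the link or a healthy cache device should deliver.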
Michael_P Posted January 3, 2020

11 hours ago, Tucubanito07 said: So technically the way I have it set up is good; it's just that the drive is not good at all for caching.

You can test the link itself with iperf.
Tucubanito07 (Author) Posted January 3, 2020

7 hours ago, Michael_P said: You can test the link itself with iperf.

I don't know how to do this; I will search online for how to test it. I do have it installed from last time, when I tried to figure it out and couldn't. Thank you.
Tucubanito07 (Author) Posted January 3, 2020

12 hours ago, johnnie.black said: Most likely. If the initial speed is 500MB/s (while it's caching to RAM), the network is performing well enough, though not great, since initial speeds closer to 1GB/s would be better. But the main problem appears to be that the cache device can't keep up: very few SATA SSDs can sustain 300-400MB/s. The Samsung 850/860 EVO and Crucial MX500 can usually do around 300MB/s, for the larger models, 500GB and above.

Thank you.
NewDisplayName Posted January 3, 2020 (edited)

https://iperf.fr/iperf-download.php
https://www.linode.com/docs/networking/diagnostics/install-iperf-to-diagnose-network-speed-in-linux/

iperf tests run only in RAM, so they're SSD/HDD-unrelated; they just test the link speed. But I think like johnnie: on my Samsung 850 Pro (if I recall correctly) I was also only getting ~300MB/s after some time (or some GB written). If you have a chance to get your hands on an NVMe drive, it will greatly improve your performance; they can easily read and write 2,000MB/s. A Corsair Force MP510 960GB NVMe PCIe Gen3 x4 M.2 SSD (up to 3,480 MB/s) is what I currently use for my 10Gbit uplink.

Edited January 3, 2020 by nuhll
Tucubanito07 (Author) Posted January 3, 2020

3 minutes ago, nuhll said: iperf tests run only in RAM, so they're SSD/HDD-unrelated; they just test the link speed.

I understand that. I have Nerd Pack installed and the iperf tool set to "on", but when I use the terminal in Unraid it says it's not installed.
NewDisplayName Posted January 3, 2020 (edited)

3 minutes ago, Tucubanito07 said: I understand that. I have Nerd Pack installed and the iperf tool set to "on", but when I use the terminal in Unraid it says it's not installed.

Just to be sure, after you clicked "on", did you click apply?

edit: seems to be working fine for me

    root@Unraid-Server:~# iperf3
    iperf3: parameter error - must either be a client (-c) or server (-s)

    Usage: iperf [-s|-c host] [options]
           iperf [-h|--help] [-v|--version]

    Server or Client:
      -p, --port      #         server port to listen on/connect to
      -f, --format    [kmgKMG]  format to report: Kbits, Mbits, KBytes, MBytes
      -i, --interval  #         seconds between periodic bandwidth reports
      -F, --file name           xmit/recv the specified file
      -A, --affinity n/n,m      set CPU affinity
      -B, --bind      <host>    bind to a specific interface
      -V, --verbose             more detailed output
      -J, --json                output in JSON format
      --logfile f               send output to a log file
      --forceflush              force flushing output at every interval
      -d, --debug               emit debugging output
      -v, --version             show version information and quit
      -h, --help                show this message and quit
    Server specific:
      -s, --server              run in server mode
      -D, --daemon              run the server as a daemon
      -I, --pidfile file        write PID file
      -1, --one-off             handle one client connection then exit
    Client specific:
      -c, --client    <host>    run in client mode, connecting to <host>
      -u, --udp                 use UDP rather than TCP
      -b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
                                (default 1 Mbit/sec for UDP, unlimited for TCP)
                                (optional slash and packet count for burst mode)
      --fq-rate #[KMG]          enable fair-queuing based socket pacing in bits/sec (Linux only)
      -t, --time      #         time in seconds to transmit for (default 10 secs)
      -n, --bytes     #[KMG]    number of bytes to transmit (instead of -t)
      -k, --blockcount #[KMG]   number of blocks (packets) to transmit (instead of -t or -n)
      -l, --len       #[KMG]    length of buffer to read or write
                                (default 128 KB for TCP, dynamic or 1 for UDP)
      --cport         <port>    bind to a specific client port (TCP and UDP, default: ephemeral port)
      -P, --parallel  #         number of parallel client streams to run
      -R, --reverse             run in reverse mode (server sends, client receives)
      -w, --window    #[KMG]    set window size / socket buffer size
      -C, --congestion <algo>   set TCP congestion control algorithm (Linux and FreeBSD only)
      -M, --set-mss   #         set TCP/SCTP maximum segment size (MTU - 40 bytes)
      -N, --no-delay            set TCP/SCTP no delay, disabling Nagle's Algorithm
      -4, --version4            only use IPv4
      -6, --version6            only use IPv6
      -S, --tos N               set the IP 'type of service'
      -L, --flowlabel N         set the IPv6 flow label (only supported on Linux)
      -Z, --zerocopy            use a 'zero copy' method of sending data
      -O, --omit N              omit the first n seconds
      -T, --title str           prefix every output line with this string
      --get-server-output       get results from server
      --udp-counters-64bit      use 64-bit counters in UDP test packets

    [KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-

    iperf3 homepage at: http://software.es.net/iperf/
    Report bugs to: https://github.com/esnet/iperf

Edited January 3, 2020 by nuhll
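Note the `-J` flag in the help above: iperf3 can emit its report as JSON, which makes it easy to script over repeated runs. A sketch of pulling the summary throughput out of such a report (the sample payload here is hypothetical and trimmed to just the fields of interest; a real report has many more):

```python
import json

# Hypothetical, heavily trimmed iperf3 -J report for illustration.
sample = json.dumps({
    "end": {
        "sum_sent":     {"bits_per_second": 9.4e9},
        "sum_received": {"bits_per_second": 9.3e9},
    }
})

def summarize(report_json):
    """Return (sent, received) throughput in MB/s from an iperf3 -J report."""
    end = json.loads(report_json)["end"]
    to_mbs = lambda bps: bps / 8 / 1e6  # bits/s -> MB/s
    return (to_mbs(end["sum_sent"]["bits_per_second"]),
            to_mbs(end["sum_received"]["bits_per_second"]))

sent, recv = summarize(sample)
print(f"sent: {sent:.1f} MB/s, received: {recv:.1f} MB/s")
```

A healthy directly-connected 10Gb link should report close to line rate here; if iperf shows that but SMB transfers still crawl, the problem is the storage side, not the network.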
Tucubanito07 (Author) Posted January 3, 2020

4 minutes ago, nuhll said: Just to be sure, after you clicked "on", did you click apply?

I was trying with iperf and not iperf3, that's why. You are a genius, thank you.
NewDisplayName Posted January 3, 2020

Just now, Tucubanito07 said: I was trying with iperf and not iperf3, that's why. You are a genius, thank you.

It's always the small things! np!