(SOLVED) 10GB Slow transfer speeds



Hello all, I recently purchased two Dell Broadcom 57810 dual-port 10Gb PCIe SFP+ cards for faster transfer speeds between my Windows 10 PC and my unRAID server running 6.8.2.

I watched multiple guides and have googled an epic ton, and I believe I have set everything up properly, but I am only able to get speeds of 400+ MB/s, capping at 500. I have set up a RAM disk on the Windows PC and tried transferring to an Unassigned Devices SSD, and I have also tried transferring between an SSD on the Windows PC and an SSD on the unRAID server. I have also set up iperf3 and run the necessary tests, and I am only getting about 3.89 to 4 Gbit/s of bandwidth. I have tweaked the MTU/jumbo frame settings and it makes absolutely no difference. What am I missing?
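For anyone following along, these are the basic tests I ran (the server IP is from my setup; swap in your own), plus a quick conversion showing why an iperf3 reading in the high 3s lines up with the 400-500 MB/s I see in file copies:

```shell
# On the unRAID server (listens on the default port 5201):
#   iperf3 -s
# On the Windows PC, single TCP stream:
#   iperf3 -c 10.10.20.199
# Same test with 4 parallel streams (-P 4), to rule out a single-stream limit:
#   iperf3 -c 10.10.20.199 -P 4

# Sanity check: a 3.70 Gbit/s iperf3 result is roughly 441 MiB/s
# (iperf3 Gbits are decimal; Windows shows MiB), matching the file-copy speeds:
awk 'BEGIN { printf "%.0f\n", 3.70e9 / 8 / 1048576 }'   # ~441
```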

13 minutes ago, halfelite said:

A couple of things to try: how many threads are you running in iperf3? You don't give many hardware specs, so are you sure the cards are in a full x8 slot, and not an x8 slot that shares PCIe lanes, making it x4 on bandwidth?

For the Windows PC the card is in a PCIe x16 slot, and the same for the unRAID server. The Windows PC is a P8Z77-WS with 32 GB of RAM and an i7-3770K processor. The unRAID server is a Supermicro X9DRI-LN4F+ with 64 GB of ECC RAM and 2x Intel Xeon E5-2680 v2 10-core 2.80 GHz.
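To check the shared-lanes point, one way to read the negotiated PCIe link on the unRAID side is below (the bus address 04:00.0 is just an example; use the one lspci prints for your card):

```shell
# Find the NIC's PCIe bus address on the unRAID console:
#   lspci | grep -i ethernet          -> e.g. "04:00.0 Ethernet controller ..."
# Then compare what the card supports (LnkCap) with what it negotiated (LnkSta):
#   lspci -vv -s 04:00.0 | grep -E 'LnkCap:|LnkSta:'

# A card that trained at x4 instead of x8 halves the available bandwidth.
# Reading the width out of a sample LnkSta line:
echo 'LnkSta: Speed 5GT/s, Width x4, TrErr- Train-' | grep -o 'Width x[0-9]*'
```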

49 minutes ago, johnnie.black said:

Did you do this? What's the iperf max bandwidth with a single transfer?

I'm sorry, I'm not sure I understand how to run that test. I followed the latest SpaceInvaderOne video tutorial regarding iperf; I don't think he did anything in particular to get it working for him. I followed along and definitely did not get the same result.

 

10 hours ago, Sinister said:

400+ MB/s capping at 500

 

10 hours ago, Sinister said:

transferring between an SSD on the windows PC and an SSD on the unRAID server

You're hitting the max speed of the SSDs. If you need more than 500 MB/s you need, for example, NVMe drives on both sides, or storage that can handle more than that.

8 hours ago, bastl said:

 

You're hitting the max speed of the SSDs. If you need more than 500 MB/s you need, for example, NVMe drives on both sides, or storage that can handle more than that.

That makes sense, which is why SpaceInvaderOne used two RAM disks in his tutorial. But then my question is: why do I get these speeds using iperf3, which ignores the hard drives? My PC on the left, unRAID server on the right:

[screenshot: iperf3 results]

10 minutes ago, johnnie.black said:

Iperf tests the actual LAN bandwidth; you're unlikely to get significantly more speed during a single transfer than that result. It's usually the hardware limiting it: NIC, cable, switch, even the board/CPU combo.

At this point I'm betting heavily on the cards not meeting expectations due to their age. Do you think I should switch?


So I have replaced the Broadcom cards with two Mellanox MCX311A-XCAT CX311A ConnectX-3 EN 10GbE SFP+ cards. I have created a RAM disk in Windows 10 and also in unRAID. I have 64 GB of DDR3 ECC memory in the unRAID server and 32 GB of RAM in the Windows machine. I could be wrong, but wouldn't this test ignore any hardware limitations and yield better results?
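For anyone wanting to reproduce the RAM disk on the unRAID side, a tmpfs mount is one way to do it (a sketch only; the size and mount path are arbitrary choices, run as root, and it has to fit in free RAM):

```shell
# Create an 8 GB RAM-backed filesystem on the unRAID server:
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk

# ...run the copy tests against /mnt/ramdisk, then tear it down:
umount /mnt/ramdisk
```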

3 hours ago, Sinister said:

wouldn't this test ignore any hardware limitations and yield better results?

An iperf test shouldn't be affected by how much memory you have.

 

BTW, I suggest referring to the info below and running some more tests, especially on the NUMA node and RSS settings, and confirm there is no other NIC in the same network domain.

 

http://www.darrylvanderpeijl.com/windows-server-2016-networking-optimizing-network-settings/

 

Below is a ConnectX-3 in unRAID and an Intel X520 in Windows (jumbo frames disabled, single-CPU systems on both sides):

 

Connecting to host 192.168.9.181, port 5201
[  4] local 192.168.9.182 port 37938 connected to 192.168.9.181 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   951 MBytes  7.97 Gbits/sec    0    325 KBytes
[  4]   1.00-2.00   sec   951 MBytes  7.98 Gbits/sec    0    279 KBytes
[  4]   2.00-3.00   sec   970 MBytes  8.14 Gbits/sec    0    277 KBytes
[  4]   3.00-4.00   sec   957 MBytes  8.03 Gbits/sec    0    277 KBytes
[  4]   4.00-5.00   sec   974 MBytes  8.17 Gbits/sec    0    277 KBytes
[  4]   5.00-6.00   sec   975 MBytes  8.18 Gbits/sec    0    277 KBytes
[  4]   6.00-7.00   sec   971 MBytes  8.14 Gbits/sec    0    277 KBytes
[  4]   7.00-8.00   sec   977 MBytes  8.19 Gbits/sec    0    277 KBytes
[  4]   8.00-9.00   sec   966 MBytes  8.10 Gbits/sec    0    277 KBytes
[  4]   9.00-10.00  sec   972 MBytes  8.15 Gbits/sec    0    277 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.44 GBytes  8.11 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  9.43 GBytes  8.10 Gbits/sec                  receiver
Name                                            : Ethernet 2
InterfaceDescription                            : Intel(R) Ethernet Server Adapter X520-1
Enabled                                         : True
NumberOfReceiveQueues                           : 2
Profile                                         : NUMAStatic
BaseProcessor: [Group:Number]                   : 0:0
MaxProcessor: [Group:Number]                    : 0:6
MaxProcessors                                   : 4
RssProcessorArray: [Group:Number/NUMA Distance] : 0:0/0  0:2/0  0:4/0  0:6/0
IndirectionTable: [Group:Number]                : 0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
                                                  0:0   0:2     0:0     0:2     0:0     0:2     0:0     0:2
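The table above is `Get-NetAdapterRss` output. You can inspect and trim RSS from an elevated PowerShell prompt like this (the adapter name "Ethernet 2" is from my system; substitute your own):

```shell
# PowerShell (elevated). Show the current RSS layout for one adapter:
Get-NetAdapterRss -Name "Ethernet 2"

# Limit RSS to fewer processors (e.g. the default 8 down to 4); fewer queues
# can help on NUMA systems where some queues land on the far node:
Set-NetAdapterRss -Name "Ethernet 2" -MaxProcessors 4

# Disable/enable RSS entirely, to compare:
Disable-NetAdapterRss -Name "Ethernet 2"
Enable-NetAdapterRss -Name "Ethernet 2"
```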

 


I'm also using Mellanox ConnectX-2 and ConnectX-3 cards with a MikroTik CRS309-1G-8S+IN switch, and I usually get around 8 to 9 Gb/s:

 

D:\temp\iperf>iperf3 -c 10.0.0.7
Connecting to host 10.0.0.7, port 5201
[  4] local 10.0.0.50 port 52218 connected to 10.0.0.7 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.01 GBytes  8.67 Gbits/sec
[  4]   1.00-2.00   sec   896 MBytes  7.51 Gbits/sec
[  4]   2.00-3.00   sec   964 MBytes  8.09 Gbits/sec
[  4]   3.00-4.00   sec  1017 MBytes  8.53 Gbits/sec
[  4]   4.00-5.00   sec  1012 MBytes  8.49 Gbits/sec
[  4]   5.00-6.00   sec  1.00 GBytes  8.62 Gbits/sec
[  4]   6.00-7.00   sec  1023 MBytes  8.58 Gbits/sec
[  4]   7.00-8.00   sec  1.03 GBytes  8.82 Gbits/sec
[  4]   8.00-9.00   sec   984 MBytes  8.25 Gbits/sec
[  4]   9.00-10.00  sec   983 MBytes  8.24 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  9.76 GBytes  8.38 Gbits/sec                  sender
[  4]   0.00-10.00  sec  9.76 GBytes  8.38 Gbits/sec                  receiver

 


I have tried everything in the posted article that I was able to understand, and still I was unable to get anywhere near that speed.

 

C:\iperf3>iperf3 -c 10.10.20.199
Connecting to host 10.10.20.199, port 5201
[  4] local 10.10.20.196 port 63119 connected to 10.10.20.199 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   426 MBytes  3.57 Gbits/sec
[  4]   1.00-2.00   sec   429 MBytes  3.60 Gbits/sec
[  4]   2.00-3.00   sec   444 MBytes  3.72 Gbits/sec
[  4]   3.00-4.00   sec   433 MBytes  3.63 Gbits/sec
[  4]   4.00-5.00   sec   450 MBytes  3.77 Gbits/sec
[  4]   5.00-6.00   sec   445 MBytes  3.74 Gbits/sec
[  4]   6.00-7.00   sec   454 MBytes  3.80 Gbits/sec
[  4]   7.00-8.00   sec   445 MBytes  3.74 Gbits/sec
[  4]   8.00-9.00   sec   446 MBytes  3.74 Gbits/sec
[  4]   9.00-10.00  sec   443 MBytes  3.72 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  4.31 GBytes  3.70 Gbits/sec                  sender
[  4]   0.00-10.00  sec  4.31 GBytes  3.70 Gbits/sec                  receiver

 

Not sure what I'm doing wrong here when I'm using iperf to test; jumbo frames and every other setting have been checked. The cards have been changed, and even transferring from RAM disk to RAM disk isn't working. I'd say only twice ever, when transferring to an Unassigned Devices SSD, did I see it go well over 1 GB/s (peaking at 1.09 GB/s).

 

According to this post, iperf is a RAM-only test that doesn't involve any kind of disk.

 


So after some time I have made numerous changes and still seem to be having an issue. I have changed the Mellanox card in the server to a Mellanox MCX312A-XCBT CX312A ConnectX-3 EN 10GbE SFP+ dual-port PCIe NIC. I have also purchased two Samsung 970 EVO Plus 250GB M.2 NVMe SSDs with V-NAND technology, one in the desktop and one in the server as an unassigned device. MTU is set to 9014 on both cards. On the Windows side, both send and receive buffers are 4096, interrupt moderation is disabled, and receive-side scaling is enabled.
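For reference, the matching settings on the unRAID side can be applied from the console (sketch; the interface name eth0 is an assumption, and changes don't persist across reboots unless added to the go file):

```shell
# Assumes the 10GbE interface is eth0 -- check with: ip link
# Jumbo frames: Windows "9014" includes the 14-byte Ethernet header,
# so the matching Linux payload MTU is 9000:
ip link set eth0 mtu 9000

# Raise ring buffers toward the NIC maximum (see "ethtool -g eth0" for limits):
ethtool -G eth0 rx 4096 tx 4096

# Turn off adaptive interrupt moderation, comparable to the Windows setting:
ethtool -C eth0 adaptive-rx off rx-usecs 0
```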


Could you test on a single-CPU system or another platform on either side, whether Windows or unRAID?

 

I ran some tests on a Threadripper platform with CPU pinning; the difference in results was small.

 


[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   866 MBytes  7.27 Gbits/sec    0    399 KBytes       
[  4]   1.00-2.00   sec   856 MBytes  7.18 Gbits/sec    0    365 KBytes       
[  4]   2.00-3.00   sec   881 MBytes  7.39 Gbits/sec    2    405 KBytes       
[  4]   3.00-4.00   sec   880 MBytes  7.38 Gbits/sec    0    376 KBytes       
[  4]   4.00-5.00   sec   876 MBytes  7.35 Gbits/sec    2    422 KBytes       
[  4]   5.00-6.00   sec   872 MBytes  7.32 Gbits/sec    0    362 KBytes       
[  4]   6.00-7.00   sec   874 MBytes  7.33 Gbits/sec    0    405 KBytes       
[  4]   7.00-8.00   sec   871 MBytes  7.31 Gbits/sec    0    385 KBytes       
[  4]   8.00-9.00   sec   870 MBytes  7.30 Gbits/sec    2    436 KBytes       
[  4]   9.00-10.00  sec   878 MBytes  7.36 Gbits/sec    4    368 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  8.52 GBytes  7.32 Gbits/sec   10             sender
[  4]   0.00-10.00  sec  8.52 GBytes  7.32 Gbits/sec                  receiver

But I noticed a problem (not related to pinning): you can see some retries during the test, and this is also indicated on the switch side. Anyway, I need to fix that later.

 

What iperf3 version are you running? I notice you don't have a "Retr" column.

 

[screenshot: iperf3 output]

 

 


I booted the Threadripper into bare-metal Windows 10, and the iperf3 results were really poor and unstable (no hardware changes at all). NIC parameter tuning helped somewhat.

 

Quote

Connecting to host 192.168.9.181, port 5201
[  4] local 192.168.11.188 port 50231 connected to 192.168.9.181 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   227 MBytes  1.90 Gbits/sec
[  4]   1.00-2.00   sec   219 MBytes  1.84 Gbits/sec
[  4]   2.00-3.00   sec   231 MBytes  1.93 Gbits/sec
[  4]   3.00-4.00   sec   211 MBytes  1.77 Gbits/sec
[  4]   4.00-5.00   sec   215 MBytes  1.81 Gbits/sec
[  4]   5.00-6.00   sec   206 MBytes  1.73 Gbits/sec
[  4]   6.00-7.00   sec   212 MBytes  1.78 Gbits/sec
[  4]   7.00-8.01   sec   213 MBytes  1.78 Gbits/sec
[  4]   8.01-9.00   sec   233 MBytes  1.96 Gbits/sec
[  4]   9.00-10.00  sec   217 MBytes  1.82 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  2.13 GBytes  1.83 Gbits/sec                  sender
[  4]   0.00-10.00  sec  2.13 GBytes  1.83 Gbits/sec                  receiver

 


Connecting to host 192.168.9.181, port 5201
[  4] local 192.168.11.188 port 50233 connected to 192.168.9.181 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   320 MBytes  2.68 Gbits/sec
[  4]   1.00-2.00   sec   364 MBytes  3.05 Gbits/sec
[  4]   2.00-3.00   sec   340 MBytes  2.85 Gbits/sec
[  4]   3.00-4.00   sec   291 MBytes  2.44 Gbits/sec
[  4]   4.00-5.00   sec   318 MBytes  2.67 Gbits/sec
[  4]   5.00-6.00   sec   335 MBytes  2.80 Gbits/sec
[  4]   6.00-7.00   sec   346 MBytes  2.91 Gbits/sec
[  4]   7.00-8.00   sec   363 MBytes  3.04 Gbits/sec
[  4]   8.00-9.00   sec   324 MBytes  2.72 Gbits/sec
[  4]   9.00-10.00  sec   360 MBytes  3.02 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  3.28 GBytes  2.82 Gbits/sec                  sender
[  4]   0.00-10.00  sec  3.28 GBytes  2.82 Gbits/sec                  receiver

 

Finally, my NIC (X520) doesn't support NUMA node assignment, so I set RSS from the default of 8 down to 4, then 1, and disabled interrupt moderation, and then got a good result, though still slightly unstable.

 

Quote

Connecting to host 192.168.9.181, port 5201
[  4] local 192.168.11.188 port 50414 connected to 192.168.9.181 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   958 MBytes  8.04 Gbits/sec
[  4]   1.00-2.00   sec   983 MBytes  8.25 Gbits/sec
[  4]   2.00-3.00   sec   986 MBytes  8.27 Gbits/sec
[  4]   3.00-4.00   sec   987 MBytes  8.28 Gbits/sec
[  4]   4.00-5.00   sec   984 MBytes  8.26 Gbits/sec
[  4]   5.00-6.00   sec   979 MBytes  8.21 Gbits/sec
[  4]   6.00-7.00   sec   983 MBytes  8.24 Gbits/sec
[  4]   7.00-8.00   sec   984 MBytes  8.25 Gbits/sec
[  4]   8.00-9.00   sec   980 MBytes  8.22 Gbits/sec
[  4]   9.00-10.00  sec   983 MBytes  8.25 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  9.58 GBytes  8.23 Gbits/sec                  sender
[  4]   0.00-10.00  sec  9.58 GBytes  8.23 Gbits/sec                  receiver

 


Connecting to host 192.168.9.181, port 5201
[  4] local 192.168.11.188 port 50416 connected to 192.168.9.181 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   874 MBytes  7.33 Gbits/sec
[  4]   1.00-2.00   sec   880 MBytes  7.38 Gbits/sec
[  4]   2.00-3.00   sec   821 MBytes  6.89 Gbits/sec
[  4]   3.00-4.00   sec   813 MBytes  6.82 Gbits/sec
[  4]   4.00-5.00   sec   816 MBytes  6.84 Gbits/sec
[  4]   5.00-6.00   sec   803 MBytes  6.74 Gbits/sec
[  4]   6.00-7.00   sec   816 MBytes  6.85 Gbits/sec
[  4]   7.00-8.00   sec   810 MBytes  6.80 Gbits/sec
[  4]   8.00-9.00   sec   807 MBytes  6.77 Gbits/sec
[  4]   9.00-10.00  sec   805 MBytes  6.75 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  8.05 GBytes  6.92 Gbits/sec                  sender
[  4]   0.00-10.00  sec  8.05 GBytes  6.92 Gbits/sec                  receiver

 

2 hours ago, Benson said:

What iperf3 version are you running? I notice you don't have a "Retr" column.

I am using iperf version 3.1.3 on bare-metal Windows 10 Pro.


So, an update. To further try and resolve this, I ended up transferring the card into a Shuttle PC currently running Windows 7, and in speed tests the cards gave me 1 GB/s in both directions between the server and client. I then thought something must be wrong with my Windows 10 install, so I made a partition backup, reinstalled the OS, and tested again; this time the speed was great between the original machine and the server. I began installing programs one by one, testing transfer speed immediately after each. When I finally got to my antivirus (Avast), I found that once installed it slows my LAN transfers to about 300 MB/s, and 200 going back to the client PC. Even excluding the network shares from the program entirely made no difference. I'm not sure what's up with their firewall; I have googled and found multiple posts about this, but no solution.

9 minutes ago, johnnie.black said:

Thanks for reporting back. AV software can have a noticeable performance impact in various areas.

But I would not have thought it would apply a strict firewall to a locally mapped drive, and even in the settings there is no discernible way to resolve it, even when you specify to leave specific locations alone. If I can't figure it out I'll certainly be moving on to something else that doesn't perform this way. I'm open to any suggestions for new AV software.

  • Sinister changed the title to (SOLVED)10GB Slow transfer speeds
  • 5 months later...
On 3/29/2020 at 12:42 PM, Sinister said:

But I would not have thought it would apply a strict firewall to a locally mapped drive, and even in the settings there is no discernible way to resolve it, even when you specify to leave specific locations alone. If I can't figure it out I'll certainly be moving on to something else that doesn't perform this way. I'm open to any suggestions for new AV software.

Did you solve that issue? I'm also getting 3.5 Gbps instead of 10 Gbps.

8 hours ago, SuberSeb said:

Do you solve that issue? I'm also have 3.5Gbps speeds instead of 10Gbps.

It's not something I could fix. I used Avast as an antivirus and firewall, and the only way to get around this issue was to remove the firewall component of the software completely. Not even disabling it was good enough.

  • 1 month later...

I am hunting down a similar perplexing situation. I have a powerful (16-core Threadripper, 128 GB of RAM) Windows 10 machine connected to a high-powered (32-core Threadripper, 256 GB of RAM) Ubuntu Linux machine via a 10GbE network.

 

A while ago, network speed between both machines became very slow.

 

I booted the Windows box with Ubuntu, and the iperf3 speed between the two machines was 9.41 Gbit/sec. That ruled out hardware problems.

 

I did a completely fresh install of Windows 10 Pro, version 2004, OS build 19041.610. That brought the iperf3 speed to around 2 Gbit/sec. Booting in Safe Mode and shutting down all services that could be shut down did not improve matters until ZoneAlarm was shut down; speed was then around 2.4 Gbit/s. After I completely removed ZoneAlarm and replaced it with Kaspersky, the iperf3 speed was around 3 Gbit/s, but that was as fast as I could tweak it.

 

HOWEVER, when I booted the machine from an emergency Windows 10 Pro, version 1909, build 18363.1139 install, the speed jumped to 6.14 Gbit/sec.

 

All possible tweaks (jumbo frames, large buffers, etc.) were applied with near-zero impact. I changed and updated drivers, and even swapped 10GbE NICs on the Windows machine (Asus/Aquantia XG-C100C and Intel X540-T2, both in an x8 slot of the Windows machine).

 

The fact that the same hardware runs at close to wire speed with Ubuntu on both sides, and that Ubuntu to Windows 10 Pro, version 1909, runs twice as fast as Ubuntu to Windows 10 Pro, version 2004, tells me that something is wrong with the latest Windows.

 

Reported to Microsoft, did not hear back.

 

P.S.: The ZoneAlarm impact is perplexing. The 10GbE NICs and network segments were excluded from ZoneAlarm. Also note the speed difference between TURNING OFF ZoneAlarm and REMOVING ZoneAlarm.

