[6.5.0]+ Call Traces when assigning IP Address to Docker Containers



2 hours ago, bonienl said:

I never had call traces before, and initially I started with br0, but I have more containers running now, including the "problematic" pi-hole container. 

So far so good (= no traces), but I'll let it run a couple of days longer to be more conclusive.

 

That's why I think you won't see call traces now, either.  I have them on br0 regardless of which dockers I assign to that interface.  Sure, I saw them more often with Pi-hole (probably due to the volume of network traffic), but I saw them with other containers as well, with no Pi-hole in the picture.

 

Initially, I only had the UniFi docker on br0, and that is when I saw the first call trace.  This was on unRAID 6.4.0, the first release to officially support separate IP addresses for dockers/VMs, but it has occurred with various dockers on br0 in all subsequent versions of unRAID as well.

 

The longest I ever went without a br0 macvlan call trace was ~4 days.


I have not looked at my syslog for a while, and after further inspection, I am also getting the macvlan broadcast call traces.
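
In case it is useful to anyone else doing the same check, this is roughly how I counted them (assuming the stock unRAID syslog location; the patterns are just what matches the traces shown below):

# Number of macvlan broadcast call traces logged since boot
grep -c "Workqueue: events macvlan_process_broadcast" /var/log/syslog

# Show the WARNING line of each trace (includes the timestamp in syslog)
grep "WARNING:.*nf_conntrack_confirm" /var/log/syslog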

 

I am currently running 6.5.1-rc2.

 

The only docker I had running when this started to happen was a Plex server on br0.34.

 

I have this VLAN setup as follows:

[Screenshot: VLAN (br0.34) network settings]

 

Here is one of my call traces.

 

 ------------[ cut here ]------------
 WARNING: CPU: 7 PID: 938 at net/netfilter/nf_conntrack_core.c:769 __nf_conntrack_confirm+0x97/0x4d6
 Modules linked in: vhost_net tun vhost tap kvm_intel kvm md_mod macvlan xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables ip6table_filter ip6_tables xt_nat veth ipt_MASQUERADE nf_nat_m
 fan ipmi_si [last unloaded: tun]
 CPU: 7 PID: 938 Comm: kworker/7:1 Tainted: G        W       4.14.29-unRAID #2
 Hardware name: Supermicro Super Server/X11SSZ-TLN4F, BIOS 2.0b 09/08/2017
 Workqueue: events macvlan_process_broadcast [macvlan]
 task: ffff88000b8e2b80 task.stack: ffffc9000cc18000
 RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
 RSP: 0018:ffff88085ddc3d30 EFLAGS: 00010202
 RAX: 0000000000000188 RBX: 0000000000004029 RCX: 0000000000000001
 RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81c08828
 RBP: ffff880798b85c00 R08: 0000000000000101 R09: ffff88075c6fb700
 R10: 0000000000000098 R11: 0000000000000000 R12: ffffffff81c8b080
 R13: 0000000000002c8a R14: ffff880522fe4b40 R15: ffff880522fe4b98
 FS:  0000000000000000(0000) GS:ffff88085ddc0000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00000001025ff000 CR3: 0000000001c0a002 CR4: 00000000003626e0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
 <IRQ>
 ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
 nf_hook_slow+0x37/0x96
 ip_local_deliver+0xab/0xd3
 ? inet_del_offload+0x3e/0x3e
 ip_rcv+0x311/0x346
 ? ip_local_deliver_finish+0x1b8/0x1b8
 __netif_receive_skb_core+0x6ba/0x733
 ? xen_cpu_up_prepare_hvm+0x46/0x8f
 ? __update_load_avg_cfs_rq.isra.2+0xb5/0x134
 process_backlog+0x8c/0x12d
 net_rx_action+0xfb/0x24f
 __do_softirq+0xcd/0x1c2
 do_softirq_own_stack+0x2a/0x40
 </IRQ>
 do_softirq+0x46/0x52
 netif_rx_ni+0x21/0x35
 macvlan_broadcast+0x117/0x14f [macvlan]
 macvlan_process_broadcast+0xe4/0x114 [macvlan]
 process_one_work+0x14c/0x23f
 ? rescuer_thread+0x258/0x258
 worker_thread+0x1c3/0x292
 kthread+0x111/0x119
 ? kthread_create_on_node+0x3a/0x3a
 ? SyS_exit_group+0xb/0xb
 ret_from_fork+0x35/0x40
 Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> 0b 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09 
 ---[ end trace 4a98e5594110a8be ]---
 mdcmd (696): set md_write_method 1
 
 mdcmd (697): set md_write_method 0
 
 ------------[ cut here ]------------
 

On 5/23/2018 at 6:27 PM, bonienl said:

As a test I have now moved a number of my containers from br1 (separate interface) to br0 (shared interface). All these containers have a fixed IP address.

 

Two days passed and still no call traces... despite some heavy use of br0.

35 minutes ago, bonienl said:

 

Two days passed and still no call traces... despite some heavy use of br0.

 

As I said, I would be more surprised if you did see macvlan call traces than if you didn't, given that you have never seen them on any interface.

 

I don't know why I, and others, get them.  Clearly, there is something different in our hardware, drivers, configuration, etc. that causes them to occur.  Even though we all have different hardware, there must be some other commonality that causes these call traces.

 

In my case, it is even more curious that I only see them on br0 (I haven't yet tried br1).  Until Limy posted his call trace above on br0.34 (a VLAN), I believe everyone else who reported the macvlan call traces had seen them on br0.  I don't know if that is simply because they never tried assigning IP addresses to dockers on a different interface.


I had another look. All call traces happen on reception of a broadcast frame.

 

This would indicate that some other device in the local network sends a broadcast message which causes the Docker networking to fall over (or, better said, it is unable to handle whatever is requested in that broadcast message).

 

When you moved to br0.3, do you have only your unRAID server and your router communicating on this network?

 

Perhaps another test could be to temporarily disconnect the other/suspicious devices in your local network to which br0 connects.

 

P.S. It reminds me of an issue with one of our cable modem models, which would randomly hang on broadcast traffic. We needed help from Broadcom to look at it at chip level and discovered that a particular field went missing in frame exchanges. Filling in that field (a software bug) resolved the issue, but we lost 6 months.

 

4 minutes ago, bonienl said:

I had another look. All call traces happen on reception of a broadcast frame. 

 

Yes, I have noticed this as well.

 

4 minutes ago, bonienl said:

When you moved to br0.3, do you have only your unRAID server and your router communicating in this network? 

 

On br0 and br0.3, the only devices communicating on these interfaces are the unRAID server/dockers and the router.  Like you, I have a Ubiquiti USG router controlled through the UniFi docker.  I have never changed the default setting on this router that allows all networks to communicate with each other.

3 minutes ago, bonienl said:

By default the USG router will not forward broadcast traffic between different networks. It may be that these broadcast messages exist on br0 but not on br0.3.

 

 

I suppose lots of things could generate broadcast messages on br0: another docker running as host/bridge on the unRAID IP?  Another device on my network in the same LAN as the unRAID server?  The latter seems plausible since br0 is in the same subnet as the server and the rest of my devices, whereas br0.3 is a separate network.

 

I eventually want to move more of my dockers to separate IP addresses, but I have resisted due to the call traces; although, at least, br0.3 appears to be a possibility.

 

Here's my current docker setup (I have the Pi-hole docker disabled as I am currently running it on a Raspberry Pi):

 

[Screenshot: current Docker container list]


So I'm just curious.  I have a pfSense device that is handing out IP addresses on my VLAN via DHCP, and I also have my VLAN br0.34 giving out a small range of IP addresses for my docker containers.

 

Is it possible that the collisions are occurring because of this?
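
For reference, my understanding is that Docker only hands out container addresses from the ip-range of the macvlan network, so a collision should only be possible if that range overlaps the pfSense DHCP scope.  Roughly what that network amounts to (unRAID builds it automatically from the network/Docker settings; the /27 range below is only an example value, not my actual pool):

# Container IPs come only from --ip-range (.192-.223 in this example), so
# keeping the pfSense DHCP scope on VLAN 34 outside that block means the
# two pools cannot hand out the same address.
docker network create -d macvlan \
  --subnet=192.168.34.0/24 \
  --gateway=192.168.34.1 \
  --ip-range=192.168.34.192/27 \
  -o parent=br0.34 br0.34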

 

Also, today my whole unRAID system went down.  All my dockers that were on br0.34 now say the network is not available and will not start up.  My system is in the middle of a parity check, so I will have to wait to stop the array and re-enable it to see if br0.34 comes back as available for my dockers.

 

I'm attaching my diagnostics in case anyone is interested.

brutus-diagnostics-20180527-2354.zip


Never noticed this thread before.

 

I also have call traces on my server. Here is a snippet of my log:

 

May 27 21:23:11 Enterprise kernel: ------------[ cut here ]------------
May 27 21:23:11 Enterprise kernel: WARNING: CPU: 7 PID: 23318 at net/netfilter/nf_conntrack_core.c:769 __nf_conntrack_confirm+0x97/0x4d6
May 27 21:23:11 Enterprise kernel: Modules linked in: veth xt_nat macvlan ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs md_mod jc42 bonding igb ptp pps_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc ast ttm aesni_intel aes_x86_64 crypto_simd glue_helper drm_kms_helper cryptd intel_cstate drm intel_uncore mpt3sas ipmi_ssif intel_rapl_perf agpgart i2c_i801 i2c_algo_bit ahci i2c_core syscopyarea sysfillrect sysimgblt fb_sys_fops libahci raid_class scsi_transport_sas ie31200_edac video backlight acpi_power_meter thermal acpi_pad button fan ipmi_si [last unloaded: pps_core]
May 27 21:23:11 Enterprise kernel: CPU: 7 PID: 23318 Comm: kworker/7:0 Not tainted 4.14.40-unRAID #1
May 27 21:23:11 Enterprise kernel: Hardware name: Supermicro Super Server/X11SSM-F, BIOS 2.1a 03/07/2018
May 27 21:23:11 Enterprise kernel: Workqueue: events macvlan_process_broadcast [macvlan]
May 27 21:23:11 Enterprise kernel: task: ffff8807d6432ac0 task.stack: ffffc900058f0000
May 27 21:23:11 Enterprise kernel: RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
May 27 21:23:11 Enterprise kernel: RSP: 0018:ffff8808779c3d30 EFLAGS: 00010202
May 27 21:23:11 Enterprise kernel: RAX: 0000000000000188 RBX: 0000000000003faf RCX: 0000000000000001
May 27 21:23:11 Enterprise kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81c094bc
May 27 21:23:11 Enterprise kernel: RBP: ffff8801cfe4dd00 R08: 0000000000000101 R09: ffff8801b5997b00
May 27 21:23:11 Enterprise kernel: R10: 0000000000000098 R11: 0000000000000000 R12: ffffffff81c8b080
May 27 21:23:11 Enterprise kernel: R13: 00000000000030e6 R14: ffff880109603400 R15: ffff880109603458
May 27 21:23:11 Enterprise kernel: FS:  0000000000000000(0000) GS:ffff8808779c0000(0000) knlGS:0000000000000000
May 27 21:23:11 Enterprise kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 27 21:23:11 Enterprise kernel: CR2: 0000152a3f8aec60 CR3: 0000000001c0a003 CR4: 00000000003606e0
May 27 21:23:11 Enterprise kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
May 27 21:23:11 Enterprise kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
May 27 21:23:11 Enterprise kernel: Call Trace:
May 27 21:23:11 Enterprise kernel: <IRQ>
May 27 21:23:11 Enterprise kernel: ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
May 27 21:23:11 Enterprise kernel: nf_hook_slow+0x37/0x96
May 27 21:23:11 Enterprise kernel: ip_local_deliver+0xab/0xd3
May 27 21:23:11 Enterprise kernel: ? inet_del_offload+0x3e/0x3e
May 27 21:23:11 Enterprise kernel: ip_rcv+0x311/0x346
May 27 21:23:11 Enterprise kernel: ? ip_local_deliver_finish+0x1b8/0x1b8
May 27 21:23:11 Enterprise kernel: __netif_receive_skb_core+0x6ba/0x733
May 27 21:23:11 Enterprise kernel: process_backlog+0x8c/0x12d
May 27 21:23:11 Enterprise kernel: net_rx_action+0xfb/0x24f
May 27 21:23:11 Enterprise kernel: __do_softirq+0xcd/0x1c2
May 27 21:23:11 Enterprise kernel: do_softirq_own_stack+0x2a/0x40
May 27 21:23:11 Enterprise kernel: </IRQ>
May 27 21:23:11 Enterprise kernel: do_softirq+0x46/0x52
May 27 21:23:11 Enterprise kernel: netif_rx_ni+0x21/0x35
May 27 21:23:11 Enterprise kernel: macvlan_broadcast+0x117/0x14f [macvlan]
May 27 21:23:11 Enterprise kernel: ? __switch_to_asm+0x24/0x60
May 27 21:23:11 Enterprise kernel: macvlan_process_broadcast+0xe4/0x114 [macvlan]
May 27 21:23:11 Enterprise kernel: process_one_work+0x14c/0x23f
May 27 21:23:11 Enterprise kernel: ? rescuer_thread+0x258/0x258
May 27 21:23:11 Enterprise kernel: worker_thread+0x1c3/0x292
May 27 21:23:11 Enterprise kernel: kthread+0x111/0x119
May 27 21:23:11 Enterprise kernel: ? kthread_create_on_node+0x3a/0x3a
May 27 21:23:11 Enterprise kernel: ? SyS_exit_group+0xb/0xb
May 27 21:23:11 Enterprise kernel: ret_from_fork+0x35/0x40
May 27 21:23:11 Enterprise kernel: Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> 0b 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09 
May 27 21:23:11 Enterprise kernel: ---[ end trace 88da99320af7f2e2 ]---

 

I never looked into this issue until now. I also had issues with this virtual network: it disappears and then all Dockers crash. I have a separate IP address for every container. Not sure, but maybe it's the same issue.

 

For completeness I will attach my diagnostic files.

enterprise-diagnostics-20180528-2012.zip


I've even had the docker crashes take down a VM before.  I really need this VLAN stuff to work, so hopefully someone with some inside knowledge can replicate our issues and get to the bottom of it.

12 hours ago, Limy said:

I'm attaching my diagnostics in case anyone is interested.

 

There are no call traces in your diagnostics.

Initially, interfaces br0 and br0.34 come up and get IP addresses assigned.

 

At some point in time the complete network connection is lost (disconnected):

May 27 22:30:22 Brutus dhcpcd[1868]: br0: carrier lost
May 27 22:30:22 Brutus dhcpcd[1919]: br0.34: carrier lost
May 27 22:30:22 Brutus avahi-daemon[5752]: Withdrawing address record for 192.168.34.111 on br0.34.
May 27 22:30:22 Brutus avahi-daemon[5752]: Withdrawing address record for 192.168.19.75 on br0.

Then the network connection is restored after a few seconds:

May 27 22:30:25 Brutus dhcpcd[1868]: br0: carrier acquired
May 27 22:30:25 Brutus dhcpcd[1919]: br0.34: carrier acquired
May 27 22:30:26 Brutus dhcpcd[1868]: br0: rebinding lease of 192.168.19.75
May 27 22:30:26 Brutus dhcpcd[1919]: br0.34: rebinding lease of 192.168.34.111

But Docker does not yet have the interface br0.34:

May 27 22:30:26 Brutus root: MongoDB: Error response from daemon: network br0.34 not found
May 27 22:30:26 Brutus root: Error: failed to start containers: MongoDB
May 27 22:30:26 Brutus root: FoulCheck: Error response from daemon: network br0.34 not found
May 27 22:30:26 Brutus root: Error: failed to start containers: FoulCheck
May 27 22:30:27 Brutus root: plex: Error response from daemon: network br0.34 not found
May 27 22:30:27 Brutus root: Error: failed to start containers: plex

The DHCP server on br0.34 assigns an IP address later:

May 27 22:30:31 Brutus dhcpcd[1919]: br0.34: leased 192.168.34.111 for 7173 seconds
May 27 22:30:31 Brutus dhcpcd[1919]: br0.34: adding route to 192.168.34.0/24
May 27 22:30:31 Brutus dhcpcd[1919]: br0.34: adding default route via 192.168.34.1

A couple of things you need to investigate:

  1. Why did the connection loss occur? A bad cable?
  2. Does the interface br0.34 exist for Docker (run: docker network ls)?
  3. Docker fails to start the containers but doesn't make any further attempts. Can you start the containers manually? (See the example commands below.)
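
For example (using the container name visible in the log above):

# 2. Is br0.34 known to Docker?
docker network ls | grep br0.34

# 3. Try starting one of the failed containers by hand; the daemon's error
#    message will say why it refuses (e.g. "network br0.34 not found")
docker start plex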

 


Hi bonienl,

 

Thanks for responding.

 

1.  Cables are new.  Shouldn't be a problem.

 

2.  Output from docker network ls is:

 

Linux 4.14.29-unRAID.
Last login: Mon May 28 12:32:15 -0600 2018 on /dev/pts/3.
root@Brutus:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9e0b48a3b121        bridge              bridge              local
8b32acdb1fc3        host                host                local
33a225596105        none                null                local
root@Brutus:~#
 

3.  None of the dockers will start because br0.34 is currently not available.  I receive this error if I try to start a container:

 

[Screenshot: container start error]

 

If I assign the network as bridge, then I can start the container.  Of course this is not a solution.

 

Since we are on the topic of br0 and br0.34 missing from my list, I noticed that you mentioned before that

/etc/rc.d/rc.docker restart

can be used to get them back into the list.  Is that doable, and are there any side effects?

 

Thanks.


The execution error is the result of the missing interface, though the text is misleading.

 

Try

rm /var/lib/docker/network/files/local-kv.db
/etc/rc.d/rc.docker restart

The above restores all networks in a forced way, no side effects

1 hour ago, Limy said:

I've even had the docker crashes taking down a VM before.  I really need this VLAN stuff to work, so hopefully someone with some inside knowledge can replicate our issues and get to the bottom of it.

 

I don't use VMs (maybe in the future), so I do not have the same experience as you. If I remember correctly, the virtual network and Docker crashed twice, but maybe it's not related.

2 minutes ago, bonienl said:

The execution error is the result of the missing interface, though the text is misleading.

 

Try


rm /var/lib/docker/network/files/local-kv.db
/etc/rc.d/rc.docker restart

The above restores all networks in a forced way, no side effects

The commands worked, and br0 and br0.34 are now available to my dockers again; docker network ls now shows this:

 

root@Brutus:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c089f40263d0        br0                 macvlan             local
dd131da6f86f        br0.34              macvlan             local
2c885738c7a4        bridge              bridge              local
7a707e06dbaa        host                host                local
491f9e52747e        none                null                local
root@Brutus:~#

 

I should note that I have seen my interfaces lose carrier and then reacquire it, but that generally only happens during the boot process.
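
For the record, these are quick ways to check whether carrier flaps are happening outside of boot (standard kernel interfaces, nothing unRAID-specific):

# 1 = carrier present, 0 = carrier lost
cat /sys/class/net/br0/carrier
cat /sys/class/net/br0.34/carrier

# NIC link up/down events the kernel has recorded since boot
dmesg | grep -i "link is"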

1 hour ago, MvL said:

I also have call trace om my server. Here a snipet of my log

 

 Looks like your call traces are just like all the rest associated with macvlan; they are the result of receiving a network broadcast and, for some reason, not being able to handle it properly.

 

Just out of curiosity, are your dockers assigned IP addresses on br0 or do you have a separate VLAN for dockers?

2 minutes ago, Hoopster said:

 

 Looks like your call traces are just like all the rest associated with macvlan; they are the result of receiving a network broadcast and, for some reason, not being able to handle it properly.

 

Just out of curiosity, are your dockers assigned IP addresses on br0 or do you have a separate VLAN for dockers?

I have a separate VLAN (br0.34) for some of my dockers.  When I added VLAN 34 in the Network Settings, I set it up to provide IPv4 addresses automatically from a small pool that does not interfere with the IP addresses that are assigned via my main pfSense router.

8 hours ago, Hoopster said:

 

 Looks like your call traces are just like all the rest associated with macvlan; they are the result of receiving a network broadcast and, for some reason, not being able to handle it properly.

 

Just out of curiosity, are your dockers assigned IP addresses on br0 or do you have a separate VLAN for dockers?

 

Yes, my Dockers' IP addresses are on br0.

11 hours ago, MvL said:

 

Yes, my Dockers ip addresses are on br0.

 

I have found that I only get the call traces on br0.  If I set up a VLAN (br0.3 on my system) and assign docker IP addresses on this subnet, there are no call traces.  Bonienl thinks that something is generating network broadcasts on br0 which do not exist on br0.3.  This makes sense because I only have dockers and the router communicating on br0.3.  Since br0 is on the same subnet as unRAID, and all other devices on my LAN (computers, phones, tablets, laptops, FireTV devices, Raspberry Pi, TV tuners, WiFi access points, etc.) are also on that subnet, there could be something generating broadcasts that br0 cannot handle and that does not exist on br0.3.

 

This does not explain why Limy sees call traces on VLAN br0.34 (unless other network devices are also broadcasting on that subnet).

 

You might find that if you set up a VLAN for your dockers, your call traces go away. 

 

Of course, I am curious as to what type of broadcast causes br0 to choke and what is generating it, but at least I am call-trace free on br0.3.

3 hours ago, Hoopster said:

Of course, I am curious as to what type of broadcast causes br0 to choke and what is generating it, but at least I am call-trace free on br0.3.

 

Maybe run tcpdump and filter out the common broadcasts it receives that don't trigger any call trace.
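
Something along these lines should do it (assuming tcpdump is installed; the filter and capture path are only suggestions):

# Watch broadcast frames arriving on br0 live, with link-layer headers and
# no name resolution, ignoring routine ARP noise
tcpdump -i br0 -e -n "broadcast and not arp"

# Or write all broadcast frames to a capture file for later inspection
tcpdump -i br0 -n -w /mnt/user/system/br0-broadcast.pcap broadcast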

  • 2 weeks later...

So, is anyone able to get to the bottom of this?  I did install tcpdump, but honestly, if I start only one docker (e.g., Plex running on br0.34), it takes two days before the call traces start.

 

As soon as I stop the docker container the traces stop.

 

Obviously, not many people are coming across this problem, or they are not running VLANs.

 

Apparently there is a fix for VLANs and bonding (broadcast) in kernel 4.14.42, as shown at this link:

 

https://www.systutorials.com/linux-kernels/498439/linux-4-14-42-release/

 

Another VLAN fix in 4.14.44:

 

https://www.systutorials.com/linux-kernels/498875/linux-4-14-44-release/

 

Lots of VLAN fixes in 4.14.45:

 

https://www.systutorials.com/linux-kernels/500298/linux-4-14-45-release/

 

Basically, until this gets resolved, my VLAN is pretty much useless.  Call traces will eventually kill my server. :(

23 hours ago, Limy said:

Obviously, not many people are coming across this problem, or they are not running VLANs.

 

Most of us have the opposite problem.  I only get call traces on br0 (the same LAN as the unRAID host) and never, at least so far, on a VLAN.  Almost every other user who has reported docker IP address call traces has also received them on br0.  I don't know if they have tried VLANs.
