[Plugin] CA Fix Common Problems



Hi all, I could use some guidance with a hardware error that got reported by the Fix Common Problems plugin. I recently installed two brand-new Crucial 8GB DDR4-3200 SO-DIMMs. They seem to work just fine, but the error still appears.

 

The last lines of my server syslog state the following:

I have an appdata backup run at 4:00 AM.

Thanks for any help. 

 

Feb 28 22:07:29 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 28 22:20:16 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 28 22:51:30 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 28 23:35:31 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 02:22:11 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 29 02:53:36 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 03:28:37 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 03:30:37 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 03:33:37 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 03:38:37 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 03:58:38 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth956793e) entered disabled state
Feb 29 04:00:08 JanServer kernel: veth2517137: renamed from eth0
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth956793e) entered disabled state
Feb 29 04:00:08 JanServer kernel: device veth956793e left promiscuous mode
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth956793e) entered disabled state
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth63b01c8) entered blocking state
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth63b01c8) entered disabled state
Feb 29 04:00:08 JanServer kernel: device veth63b01c8 entered promiscuous mode
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth63b01c8) entered blocking state
Feb 29 04:00:08 JanServer kernel: docker0: port 1(veth63b01c8) entered forwarding state
Feb 29 04:00:09 JanServer kernel: docker0: port 1(veth63b01c8) entered disabled state
Feb 29 04:00:12 JanServer kernel: eth0: renamed from veth7630279
Feb 29 04:00:12 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth63b01c8: link becomes ready
Feb 29 04:00:12 JanServer kernel: docker0: port 1(veth63b01c8) entered blocking state
Feb 29 04:00:12 JanServer kernel: docker0: port 1(veth63b01c8) entered forwarding state
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth3f1d6f2) entered disabled state
Feb 29 04:00:18 JanServer kernel: veth1139b6e: renamed from eth0
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth3f1d6f2) entered disabled state
Feb 29 04:00:18 JanServer kernel: device veth3f1d6f2 left promiscuous mode
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth3f1d6f2) entered disabled state
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth6eec70f) entered blocking state
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth6eec70f) entered disabled state
Feb 29 04:00:18 JanServer kernel: device veth6eec70f entered promiscuous mode
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth6eec70f) entered blocking state
Feb 29 04:00:18 JanServer kernel: docker0: port 2(veth6eec70f) entered forwarding state
Feb 29 04:00:18 JanServer kernel: eth0: renamed from vetha3e1fdc
Feb 29 04:00:18 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6eec70f: link becomes ready
Feb 29 04:00:31 JanServer kernel: docker0: port 7(vethc0c74f3) entered disabled state
Feb 29 04:00:31 JanServer kernel: veth1a48294: renamed from eth0
Feb 29 04:00:31 JanServer kernel: docker0: port 7(vethc0c74f3) entered disabled state
Feb 29 04:00:31 JanServer kernel: device vethc0c74f3 left promiscuous mode
Feb 29 04:00:31 JanServer kernel: docker0: port 7(vethc0c74f3) entered disabled state
Feb 29 04:00:31 JanServer kernel: docker0: port 7(veth809f9fd) entered blocking state
Feb 29 04:00:31 JanServer kernel: docker0: port 7(veth809f9fd) entered disabled state
Feb 29 04:00:31 JanServer kernel: device veth809f9fd entered promiscuous mode
Feb 29 04:00:31 JanServer kernel: docker0: port 7(veth809f9fd) entered blocking state
Feb 29 04:00:31 JanServer kernel: docker0: port 7(veth809f9fd) entered forwarding state
Feb 29 04:00:31 JanServer kernel: eth0: renamed from veth7382ce0
Feb 29 04:00:31 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth809f9fd: link becomes ready
Feb 29 04:00:48 JanServer kernel: docker0: port 3(vethab03f8a) entered disabled state
Feb 29 04:00:48 JanServer kernel: veth77e7b46: renamed from eth0
Feb 29 04:00:48 JanServer kernel: docker0: port 3(vethab03f8a) entered disabled state
Feb 29 04:00:48 JanServer kernel: device vethab03f8a left promiscuous mode
Feb 29 04:00:48 JanServer kernel: docker0: port 3(vethab03f8a) entered disabled state
Feb 29 04:00:48 JanServer kernel: docker0: port 3(veth61e8890) entered blocking state
Feb 29 04:00:48 JanServer kernel: docker0: port 3(veth61e8890) entered disabled state
Feb 29 04:00:48 JanServer kernel: device veth61e8890 entered promiscuous mode
Feb 29 04:00:48 JanServer kernel: docker0: port 3(veth61e8890) entered blocking state
Feb 29 04:00:48 JanServer kernel: docker0: port 3(veth61e8890) entered forwarding state
Feb 29 04:00:48 JanServer kernel: eth0: renamed from vethcd45022
Feb 29 04:00:48 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth61e8890: link becomes ready
Feb 29 04:00:55 JanServer kernel: docker0: port 4(veth4b0b45b) entered disabled state
Feb 29 04:00:55 JanServer kernel: veth3ff0292: renamed from eth0
Feb 29 04:00:55 JanServer kernel: docker0: port 4(veth4b0b45b) entered disabled state
Feb 29 04:00:55 JanServer kernel: device veth4b0b45b left promiscuous mode
Feb 29 04:00:55 JanServer kernel: docker0: port 4(veth4b0b45b) entered disabled state
Feb 29 04:00:55 JanServer kernel: docker0: port 4(vethf4ff386) entered blocking state
Feb 29 04:00:55 JanServer kernel: docker0: port 4(vethf4ff386) entered disabled state
Feb 29 04:00:55 JanServer kernel: device vethf4ff386 entered promiscuous mode
Feb 29 04:00:55 JanServer kernel: docker0: port 4(vethf4ff386) entered blocking state
Feb 29 04:00:55 JanServer kernel: docker0: port 4(vethf4ff386) entered forwarding state
Feb 29 04:00:55 JanServer kernel: eth0: renamed from vethabc03a7
Feb 29 04:00:55 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf4ff386: link becomes ready
Feb 29 04:01:02 JanServer kernel: docker0: port 5(vethf3d705e) entered disabled state
Feb 29 04:01:02 JanServer kernel: veth54d32fe: renamed from eth0
Feb 29 04:01:02 JanServer kernel: docker0: port 5(vethf3d705e) entered disabled state
Feb 29 04:01:02 JanServer kernel: device vethf3d705e left promiscuous mode
Feb 29 04:01:02 JanServer kernel: docker0: port 5(vethf3d705e) entered disabled state
Feb 29 04:01:02 JanServer kernel: docker0: port 5(veth334b5ca) entered blocking state
Feb 29 04:01:02 JanServer kernel: docker0: port 5(veth334b5ca) entered disabled state
Feb 29 04:01:02 JanServer kernel: device veth334b5ca entered promiscuous mode
Feb 29 04:01:02 JanServer kernel: docker0: port 5(veth334b5ca) entered blocking state
Feb 29 04:01:02 JanServer kernel: docker0: port 5(veth334b5ca) entered forwarding state
Feb 29 04:01:03 JanServer kernel: eth0: renamed from veth98691d1
Feb 29 04:01:03 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth334b5ca: link becomes ready
Feb 29 04:01:08 JanServer kernel: veth8acd052: renamed from eth0
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth418d98e) entered disabled state
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth418d98e) entered disabled state
Feb 29 04:01:08 JanServer kernel: device veth418d98e left promiscuous mode
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth418d98e) entered disabled state
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth10579a2) entered blocking state
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth10579a2) entered disabled state
Feb 29 04:01:08 JanServer kernel: device veth10579a2 entered promiscuous mode
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth10579a2) entered blocking state
Feb 29 04:01:08 JanServer kernel: docker0: port 6(veth10579a2) entered forwarding state
Feb 29 04:01:08 JanServer kernel: eth0: renamed from veth32588da
Feb 29 04:01:08 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth10579a2: link becomes ready
Feb 29 04:01:15 JanServer kernel: docker0: port 8(vethe4db282) entered disabled state
Feb 29 04:01:15 JanServer kernel: veth3c3ab53: renamed from eth0
Feb 29 04:01:15 JanServer kernel: docker0: port 8(vethe4db282) entered disabled state
Feb 29 04:01:15 JanServer kernel: device vethe4db282 left promiscuous mode
Feb 29 04:01:15 JanServer kernel: docker0: port 8(vethe4db282) entered disabled state
Feb 29 04:01:15 JanServer kernel: docker0: port 8(veth92fb362) entered blocking state
Feb 29 04:01:15 JanServer kernel: docker0: port 8(veth92fb362) entered disabled state
Feb 29 04:01:15 JanServer kernel: device veth92fb362 entered promiscuous mode
Feb 29 04:01:15 JanServer kernel: docker0: port 8(veth92fb362) entered blocking state
Feb 29 04:01:15 JanServer kernel: docker0: port 8(veth92fb362) entered forwarding state
Feb 29 04:01:15 JanServer kernel: eth0: renamed from vethabaa94f
Feb 29 04:01:15 JanServer kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth92fb362: link becomes ready
Feb 29 04:03:38 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 04:17:06 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 29 04:30:39 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 04:33:39 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 04:40:02 JanServer root: Fix Common Problems Version 2024.02.22
Feb 29 04:40:06 JanServer root: Fix Common Problems: Error: Machine Check Events detected on your server
Feb 29 04:51:39 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Feb 29 05:14:40 JanServer flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update

 

janserver-diagnostics-20240229-0946.zip


You are asking about these:

Feb 28 20:56:42 JanServer mcelog: failed to prefill DIMM database from DMI data
Feb 28 20:56:42 JanServer mcelog: Kernel does not support page offline interface
Feb 28 21:02:26 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 28 21:21:52 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 28 22:20:16 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 29 02:22:11 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 29 04:17:06 JanServer kernel: mce: [Hardware Error]: Machine check events logged
Feb 29 04:40:06 JanServer root: Fix Common Problems: Error: Machine Check Events detected on your server

 

Maybe @Squid will know.

Link to comment

@trurl @Squid I just realised that Kagi Search Assistant might know the answer, and yes it did. [Still getting used to the possibilities of AI ;-)] I'm posting it for future reference.

Noted about not posting logs inline like I did, but just attaching the zip.

 

Quote

 

The message "mcelog: failed to prefill DIMM database from DMI data" is a harmless warning that can appear in logs analyzed by the mcelog tool.

When mcelog runs, it attempts to retrieve information about the memory modules (DIMMs) in the system from the Desktop Management Interface (DMI) data provided by the BIOS.

However, not all BIOS implementations provide DMI data in a format that mcelog expects. So in those cases, mcelog will be unable to "prefill" its internal DIMM database with details about the memory sticks. This message simply indicates that mcelog failed to retrieve DIMM information from the BIOS and will instead rely only on CPU-reported machine check errors for its analysis.

The warning does not necessarily indicate any hardware or software issues. Mcelog is still able to monitor for MCEs even without the DIMM database. So this message can generally be ignored as long as the system is functioning normally otherwise.
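
For anyone who wants to see what the logged MCEs actually were, rather than just the "events logged" notice, something like this from the Unraid terminal should work (a rough sketch; output depends on your CPU and on whether the mcelog daemon is running):

dmesg | grep -i mce       # kernel-side machine check messages
mcelog --client           # ask the running mcelog daemon for decoded events

If the decoded events point at memory, a full pass of Memtest86+ from the Unraid boot menu is the usual next step for newly installed RAM.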

 

 


Bit of a feature request, but is there any way to aggregate the notifications for Docker container updates? It would be considerably more UX-friendly to have a single notification saying something like "16 Docker containers have pending updates" instead of 16 notifications popping up on the right that I have to keep hitting "Close Notifications" for, over and over.

Thanks for the wonderful plugin!


I just moved from 6.9.2 to 6.12.8 and am now seeing errors related to the flash drive share.

 

[screenshot: Fix Common Problems error about the flash share]

 

However, I don't have a share named Flash. Not one that I created, at least:

 

[screenshot: Shares page with no share named Flash]

 

But I do see it if I go to /mnt/user:

[screenshot: /mnt/user listing showing a flash folder]

 

I know the guidance is to manually go to the terminal and run mv flash new-flash to rename it, but I want to confirm this before doing so, as I never created this share; it was this way when I initially built my server several years ago.

 

35 minutes ago, dmoney517 said:

I just moved from 6.9.2 to 6.12.8 and am now seeing errors related to the flash drive share. [...] I want to confirm this before doing so, as I never created this share.

You may not have created it directly, but if you ever got a top-level folder called flash on any drive, then a User Share of that name would have automatically been created.
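
If you want to see which drive that top-level folder actually lives on, something like this from the terminal should show it (a quick sketch; the cache path is an assumption that may differ on your system):

ls -d /mnt/disk*/flash /mnt/cache/flash 2>/dev/null    # list any top-level flash folder per drive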

3 hours ago, itimpi said:


You may not have created it directly, but if you ever got a top-level folder called flash on any drive, then a User Share of that name would have automatically been created.

OK... If I navigate to the /mnt/user/flash folder, it is the contents of my flash drive.

 

Should I do the rename procedure? Is it "safe" to rename the directory, since it is the flash drive?

 

cd /mnt/user

mv flash flash-drive

 

Thanks for the help!

 

8 hours ago, Squid said:

It's not the contents of your flash drive.  That is /boot.  Likely you've set the backup plugin to use /mnt/user/flash as a destination

 

OK. Not sure what happened with /mnt/user/flash, but I was assuming that whenever I connected to \\NAS\flash over SMB from my Windows desktop, it was connecting to /mnt/user/flash. But I just went into /mnt/user/flash in the terminal and it is basically empty. So Unraid is sharing the /boot drive over \\NAS\flash by default, I guess?
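
(For anyone else hitting this, the two locations can be compared directly from the terminal; a quick check, with the share name as above:

ls /boot                 # the USB flash device itself, which SMB exposes as \\NAS\flash
ls -la /mnt/user/flash   # the auto-created user share, a separate location on the array
)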

 

Either way, thanks for the help! I have moved that "share" to a new name and my Fix Common Problems scan is now clean.


Love this plugin, but would it be possible to change the way the pop-ups work for updates?

 

[screenshot: generic update notification pop-up]

 

Suggest we use the title of the update in the alert, or change it from a generic panic warning to a 'hey, there's an update for a Docker container' or 'update for xyz container'.

 

The title of the container update is used on this page, so it should be possible to pull it into the notification?

 

[screenshot: Docker update page showing container titles]

 

Why all this? Well, I and many others have the server send notifications to Discord, and it would be nice to know which are real problems and which are just updates. 🤓🙈 Of course, it all depends on what's available in the Unraid notification system.


It's a catch-all notification. If they were separated into individual ones, you'd potentially get multiple notifications simultaneously.

 

Once you see the notification, if you're comfortable with ignoring it, then hit Ignore and FCP won't send a notification if that's the only thing found.

 

FWIW, I classify them as warnings or errors, and you can have it not send notifications for anything, only for errors, or for everything.


I am concerned that this was an attempt to hack the server.

 

Possible Hack Attempt on Mar 14
On Mar 14 there were 17 invalid login attempts. This could either be yourself attempting to login to your server (SSH / Telnet) with the wrong user or password, or you could be actively be the victim of hack attacks. A common cause of this would be placing your server within your routers DMZ, or improperly forwarding ports.

This is a major issue and needs to be addressed IMMEDIATELY

NOTE: Because this check is done against the logged entries in the syslog, the only way to clear it is to either increase the number of allowed invalid logins per day (if determined that it is not a hack attempt) or to reset your server. It is not recommended under any circumstance to ignore this error

 

victower-diagnostics-20240318-1423.zip

24 minutes ago, vherberts said:

Possible Hack Attempt

These seem to be local IPs. Do you know what computer this is?

Mar 14 16:08:45 VicTower login: FAILED LOGIN 1 FROM laptop-7gmqkf7u.lan FOR , Authentication failure
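
(To see every entry FCP is counting for that day, a grep over the syslog like this should list them; this assumes the default in-RAM syslog location:

grep "FAILED LOGIN" /var/log/syslog
)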

 

On 1/1/2024 at 3:57 AM, rocketeer55 said:

 

 

I am having the same issue after upgrading. Letting it sit for a couple of minutes and then reloading the page just shows the same message. After several hours the scan has not completed.

 

I will not be upgrading to 6.13 immediately after release. I can provide diagnostics or logs if they will help make this plugin work again on 6.12.6.

 

I was having this issue earlier. Then it went away (maybe because I updated Unraid to 6.12.8), but today it started happening again. Has the root cause been found yet?

On 1/6/2024 at 8:19 AM, wndq8h21 said:

 

Same issue here.

 

Resolved for me with these steps:
 

1 - Uninstall plugin

2 - Delete the temp folder (rm -rf /tmp/fix.common.problems)

3 - Reinstall plugin

4 - Scan

 

Same issue today after updating to 6.12.9 (hanging on "now scanning your system..") and the same steps resolved it again.

On 3/27/2024 at 6:49 PM, wndq8h21 said:

 

Same issue today after updating to 6.12.9 (hanging on "now scanning your system..") and the same steps resolved it again.

Had the same issue. Deleting the tmp folder solved it, no need to uninstall and reinstall.


I recently found that some of my Docker containers had the wrong timezone configured. Would it be worthwhile to enhance this plugin to catch something like that? Perhaps it could compare the TZ configured for the container against the TZ configured for Unraid?
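
Something like the following per container is the comparison I mean (a rough sketch; the container name is a placeholder, and not every image uses a TZ environment variable):

date +%Z    # host timezone abbreviation
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' my-container | grep '^TZ='    # container's TZ, if set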


I installed mcuadros/ofelia (a scheduler) via Portainer, as it's not in the unRAID apps.
Fix Common Problems states that Docker Application scheduler has an update available for it. unRAID does show an update available, but it can't be applied ("Configuration not found. Was this container created using this plugin?").

I've manually ensured the latest image is pulled (mcuadros/ofelia:latest, currently v0.3.10) and installed. Is it possible to include version numbers in the update-available issue?

Something like, 

Docker Application scheduler has an update available for it (Installed v0.3.9 / Latest v0.3.10)
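
In the meantime, a manual check from the terminal can confirm whether the local image actually matches Docker Hub (a hedged sketch, using the tags from above):

docker pull mcuadros/ofelia:latest                                                # fetch the newest image
docker image inspect --format '{{index .RepoDigests 0}}' mcuadros/ofelia:latest   # digest of what is now local

If the digest is unchanged after the pull, the image was already current and the "update available" flag is likely stale.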


Hey there,
as I logged in today via VPN I was kinda forced to go to the FCP page, as I had a red warning popup.
It stated that on 4 days in a row someone might have tried to hack my Unraid server.

"Possible Hack Attempt on Mar 23
On Mar 23 there were 192 invalid login attempts. This could either be yourself attempting to login to your server (SSH / Telnet) with the wrong user or password, or you could be actively be the victim of hack attacks. A common cause of this would be placing your server within your routers DMZ, or improperly forwarding ports.

This is a major issue and needs to be addressed IMMEDIATELY

NOTE: Because this check is done against the logged entries in the syslog, the only way to clear it is to either increase the number of allowed invalid logins per day (if determined that it is not a hack attempt) or to reset your server. It is not recommended under any circumstance to ignore this error"

Pretty much the same was recorded the following 3 days.

I've attached the diagnostics zip. I'd be glad for any help.

 


 

molinode-diagnostics-20240403-1902.zip

21 hours ago, moli said:

Hey there, as I logged in today via VPN I was kinda forced to go to the FCP page [...] It stated that on 4 days in a row someone might have tried to hack my Unraid server. [...] On Mar 23 there were 192 invalid login attempts. [...] Pretty much the same was recorded the following 3 days.

 

Are you running Avast on a machine that's on the same network as your unRaid server?

9 hours ago, RodWorks said:

Your flash drive has possible corruption on /boot/config/unraid_notify.cfg. Post your diagnostics in the forum for more assistance.

 

Where's that file from?

 

It's not corrupted, but FCP considers any .cfg file in /config to be a valid ini file, which it technically isn't because of the quotes and brackets in the comments

On 4/4/2024 at 7:34 PM, Squid said:

Where's that file from?

 

It's not corrupted, but FCP considers any .cfg file in /config to be a valid ini file, which it technically isn't because of the quotes and brackets in the comments

File? Not sure what you're referring to.  It's nothing that I put there intentionally.  Just posted looking for some help as FCP suggested.

