[solved - I hope] Audio Latency vs win10 vs unraid



I have several problems with my unRAID server or KVM. I will try to describe them and mention some possible solutions.

 

All the problems may be connected, but I will list them separately.

 

Hardware:

Intel Core i7-5820K, Socket-LGA2011-3

ASUS X99-A, Socket 2011-3

ASUS DC2 OC NVIDIA GeForce GTX 970 4GB

Crucial Ballistix Sport 16GB 2400MHz DDR4 DIMM 288-pin

 

Unraid 6.1.9.

 

Running win10 and xubuntu in KVM. Win10 with USB and GPU passthrough; xubuntu as a plain VM.

 

1.

The server stops responding after a few days, or two weeks, or anything in between.

When this happens it also seems to spam my network, so all other machines and mobiles are getting poor speed.

I have not been able to see anything in the logs.

I suspect the win10 vm to be the problem.

 

2.

A win10 VM restart (from within win10) can bring the whole server down. This happened to me twice yesterday while I was trying to install and uninstall some audio drivers. I was not able to reach unraid.local from my laptop's browser.

(This might be the reason the server dies from time to time, e.g. a Windows update rebooting the VM and taking the whole server down with it.)

 

3.

win10 audio latency trouble: pops and skips. This is entering a world of pain, it seems. The web is flooded with these problems, and they seem to happen for all sorts of reasons (in general, not just related to unraid). This is where I have put in most of my effort, and maybe found a solution.

I have a focusrite 2i4 usb soundcard.

Problem:

Pops and skips during playback from Spotify, YouTube videos etc. During longer playback it suddenly gets worse (a 10-second sound clip takes 20-30 seconds to play back, like scrubbing with a jog wheel), to the point of videos not playing and Spotify halting.

Yesterday I discovered the program "latencymon", and lots of people blaming network drivers as the cause of this.

Latencymon showed spikes of latency during playback.

Today I connected to win10 using TeamViewer and ran latencymon again. Now the reported latency went through the roof.

AHA! So under major network load the problem gets much worse? That gives me reason to believe the network drivers are the issue here as well.

 

As my system runs on unraid, with virtual network, I guess it's not that simple to swap out the network driver?

If I buy a usb network adapter, will I get this working in my win10 vm?

Using the integrated network card for unraid server, and no network passthrough to the vm.

Just the usb device, with native drivers, set up in my vm.

I understand this will affect speed between host and VM (if it's possible to get it working), but I can live with that if it makes my other troubles go away.
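
From what I can tell, a USB adapter could be handed to the VM as a plain USB host device rather than as a network interface. A minimal sketch of the libvirt hostdev entry (it goes inside the devices section), assuming a hypothetical adapter with vendor ID 0x0b95 and product ID 0x7720 - the real IDs would come from lsusb on the unraid console:

<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- example IDs only, substitute what lsusb reports for the adapter -->
    <vendor id='0x0b95'/>
    <product id='0x7720'/>
  </source>
</hostdev>

The guest would then load its own native driver for the adapter, and the virtio interface could be removed or kept for host-to-guest traffic.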

 

Am I the only one having these issues?

 

Other ideas are welcome!

 

Link to comment

1. How much ram did you allocate to your vms

 

2. Which cores do your vms use

 

3. Did you pin/isolate any cores?

 

4. I have an X99 Deluxe which uses a very similar ethernet controller and don't have any problems with high latency under high network load. (guest: virtio | host: Intel I218-V LAN / I211-AT LAN Controller with 802.3ad link aggregation). I don't think that your ethernet drivers are the reason why you have high latency.

 

EDIT: You can try updating to the beta version (I am currently using Beta 21). The beta is very stable (the only time it crashed was when I accidentally allocated too much RAM to a VM) and uses a newer version of the virtualization software, which might fix your performance / latency problems.

Link to comment

1.

win10: 8GB ram

xubuntu: 2GB ram

 

2.

win10: cores 2,3,4,5,6,7,8,9

xubuntu: cores 10, 11

 

3.

Not sure what you mean?

 

4.

I have no issues using the built-in audio interface of the motherboard (I think...). Gaming also seems to be no problem. It's just Windows sounds, music, videos and such.

 

 

VM XML:

<domain type='kvm' id='2' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows 10 virtual desktop</name>
  <uuid>99407073-5e6b-c02b-be14-9c02ff5523e7</uuid>
  <description>windows 10 lokal</description>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
    <vcpupin vcpu='6' cpuset='8'/>
    <vcpupin vcpu='7' cpuset='9'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Windows 10 virtual desktop/vdisk1.img'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks2/Windows 10 virtual desktop/vdisk2.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:4a:6b:f1'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows 10 virtual desktop.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1b.0,bus=root.1,addr=01.0'/>
  </qemu:commandline>
</domain>

Link to comment

Stop All VMs

 

Step 1. Pin CPU Cores

Go to main

Click on „Flash“


Change / add „append isolcpus=1-5“ to the boot parameters
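
For reference, a rough sketch of how the finished syslinux.cfg label might look with the parameter appended (the exact label section varies between installs; only the append line matters here):

label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=1-5 initrd=/bzroot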


 

Reboot

 

Step 2. Change the number of cores the VMs have

Win10 VM:

Cores: 2, 3, 4, 5

Xubuntu VM:

Cores 0, 1

 

Step 3. Edit the config files

 

  (Do this AFTER step 2 or it won't work; unRAID removes manual config changes when editing a VM using the GUI.)

To edit a VM's XML config, click the VM's icon and select "Edit XML".

 

Win10 VM:

Find this line:

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu> 

And change it to this

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu> 

 

Xubuntu VM:

Find this line:

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='1' threads='2'/>
  </cpu> 

And change it to this

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu> 

 

Start the VMs and test if the problems are still there.

 

If this didn't fix all the problems, you can try updating to the newest beta version.

Link to comment

Well, I already got

win10:

<cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
</cpu>

 

xubuntu:

<cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
</cpu>

 

So I guess step 1 and 2 would be:

 

1. add „append isolcpus=3-12“ to the boot parameters

 

2. leave the core setup as today. (win10: 2-9 and xubuntu: 10-11)

 

3. leave topology as today.

 

Correct?

 

Or is there some hidden point in using fewer cores?

 

This thread mentions:

"I just made the change to enable MSI interrupts in Windows for this GPU thinking this may solve my issue" and that seemed to work. I will dig into that later...

https://lime-technology.com/forum/index.php?topic=42828.0

 

Link to comment

No!!!

 

1. Your CPU only has 6 cores with 2 threads per core, and you want to allocate 8 cores to one VM. (First core = 0, last core = 5 / with HT the logical CPUs run from 0 to 11. Remember: HT cores AREN'T real cores!)

 

2. Only pin cores 1-5

 

Use this config:

 

append isolcpus=1-5

 

win10: Cores 2, 3, 4 and 5 / <topology sockets='1' cores='4' threads='1'/>

 

xubuntu: Cores 0 and 1 / <topology sockets='1' cores='2' threads='1'/>

 

If this works you can try adding the Hyperthreaded cores to the win10 vm cores and change <topology sockets='1' cores='4' threads='1'/> to <topology sockets='1' cores='4' threads='2'/>

 

Don't forget to check the thread pairs in "Tools -> System Devices". If you want to allocate the HT cores, only allocate thread pairs or you will have extremely high latency.
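
If in doubt about which logical CPUs are siblings, the pairing can also be read straight from the unRAID console; a quick sketch, assuming the usual Linux sysfs layout (the numbering differs per board):

# show which logical CPUs share a physical core with cpu2
cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
# or, if lscpu is available, dump the whole topology at once
lscpu -e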

 

EDIT: If this doesn't work, try this config:

 

1. The Boot Parameter

append isolcpus=1-11

 

2. Change the cores to 2, 3, 4 and 5 in the gui

 

3. The VM XML config

<cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <emulatorpin cpuset='THE THREAD PAIRS'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>

 

Change the 'THE THREAD PAIRS' placeholder to the thread pairs of cores 2, 3, 4 and 5. Example: <emulatorpin cpuset='8,9,10,11'/> or <emulatorpin cpuset='8-11'/>

Link to comment

Looks like the latency issue is fixed. I have to use the machine a few more days to be sure, but it really sounds promising. No pops and skips so far, and it used to be quite bad.

 

I am now running this config:

 

append isolcpus=1-5

win10: Cores 2, 3, 4 and 5 / <topology sockets='1' cores='4' threads='2'/>

xubuntu: Core 1 / <topology sockets='1' cores='1' threads='1'/>

 

The task manager in win10 shows:

sockets: 1

Virtual processors: 4

 

Is there somewhere I can check the hyperthreading status, to see if it runs 8 threads?

 


 

Interesting:

Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

 

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.

 

Process with highest pagefault count:                spotify.exe

 

Link to comment

Task Manager -> Performance -> CPU -> Right click the graph -> Change graph to -> Logical processors

 

4 graphs -> 4 cores (without HT); 8 graphs -> 4 cores (with HT).
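
If the graphs are ambiguous, a command-line cross-check from inside the guest should show the same thing (standard Windows tooling assumed):

wmic cpu get NumberOfCores,NumberOfLogicalProcessors

4 cores / 8 logical processors would mean the HT threads made it into the guest.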

Link to comment

It's just showing 4 graphs at the moment.

 

Some more tweaking to do apparently...

 

So

 

append isolcpus=1-11

 

and

 

<emulatorpin cpuset='8,9,10,11'/>

 

while topology still

 

<topology sockets='1' cores='4' threads='1'/>

 

then?

 


Link to comment

In the Edit VM GUI, add the cores 8, 9, 10 and 11.

 

<topology sockets='1' cores='4' threads='2'/>

 

Remember to change the emulatorpin and the topology settings again (the GUI automatically removes manual config edits).

 

If you experience frame drops or lower FPS than before, change isolcpus back to 1-5.
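
A quick sanity check to confirm the boot parameter actually took effect after the reboot (run on the unRAID console, assuming nothing unusual about the boot setup):

cat /proc/cmdline
# the output should contain the isolcpus=... value exactly as set in syslinux.cfg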

Link to comment

Yes. I will try more options tonight.

 

I like to stick with XML-editing when I have added custom stuff. Like you say, the gui removes "random" settings.

 

I'll just add

 

<vcpupin vcpu='4' cpuset='6'/>
<vcpupin vcpu='5' cpuset='7'/>
<vcpupin vcpu='6' cpuset='8'/>
<vcpupin vcpu='7' cpuset='9'/>

 

This means I'm more or less back to my original XML, except

 

<topology sockets='1' cores='4' threads='2'/>

 

instead of

 

<topology sockets='1' cores='8' threads='1'/>

 

and starting unraid with the isolcpus setting.

 

I read somewhere that the topology setting didn't matter much as long as the numbers multiplied together equaled 8 (but I guess it's worth trying).

Link to comment

Reading forum posts ...

 

I see something is misunderstood.

 

isolcpus should point at the CPUs that will be used exclusively by unraid, not the other way around.

 

That means in my setup it should be

 

append isolcpus=0,6

 

Ref jonp in this thread: https://lime-technology.com/forum/index.php?topic=45379.0

 

UnRAID OS is not restricted to any particular CPUs and will utilize whichever are most available automatically (gotta love Linux!).

 

If you wish to reserve CPUs so that unRAID cannot touch them, you can add the following parameter after the "append" in your syslinux.cfg file:  isolcpus=

 

After the =, you can enter which logical CPUs you wish to isolate in the form of 0,1,2 or 0-2 or a combo thereof such as 0-2,4,7

 

So to limit unRAID to only cores 0 and 1 for example:

 

isolcpus=0-1

 

OR

 

isolcpus=0,1

 

One other thing that is not quite right is

 

<emulatorpin cpuset='8,9,10,11'/> 

 

The emulatorpin setting should list the cpus that will take care of the VM overhead, and not be the pairing logical cpu, if I understand it correctly.

 

 

I have tried lots of settings now, but I'm not there yet.

The closest yet is the setup with only 4 cpus.

 

well, off to reboot again.

Link to comment


Jonp's post is a bit confusing. isolcpus isolates the CPUs from unRAID use; it does not reserve them for unRAID.

 

Read this post and see if it doesn't help you out https://lime-technology.com/forum/index.php?topic=49051.msg470454#msg470454

Link to comment

Yes, I see it can be read two ways, and I found out later that what I stated was wrong.

 

What about emulatorpin?

 

If I'm doing this:

<vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <emulatorpin cpuset='8-11'/>
  </cputune>
...
<cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>

 

How am I setting up isolcpus?

 

append isolcpus=2,3,4,5,8,9,10,11

 

or

 

append isolcpus=2,3,4,5

Link to comment


I don't understand your cpu setup but you are not assigning both pairs.  Both pairs need to be isolated and pinned to the VM.

 

In your case:

append isolcpus=2,3,4,5

 

You are not assigning the cpus correctly.  Go to the unRAID dashboard and show me the cpu display so I can see the cpu pairs and help you with the pinning.

Link to comment

Hmm,

 

the cpu pairs are:

 

0,6

1,7

2,8

3,9

4,10

5,11

 

so, win10 is pinned to 4 "main" cores.

2

3

4

5

 

and emulatorpin to the corresponding pairs: 8, 9, 10, 11.

 

I tried this after reading this advice in another thread here.

 

0,6,1,7 and 8,9,10,11 are not reserved by isolcpus.

 

My other VM (ubuntu) I'm pinning to 1, with emulatorpin 7.

This is then handled by unraid, which also has cores 0 and 6 for its own use.

 

 

I now run this setup with

 

append isolcpus=2,3,4,5

 

I also used the MSI_tool to enable MSI on all lines mentioning graphics or audio.
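
For anyone trying to reproduce this: as far as I understand, the MSI utility just flips a per-device registry value, so the manual equivalent should be roughly this (sketched from memory; the PCI device instance path differs per system, so double-check before editing):

HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
    MSISupported (DWORD) = 1

The VM needs a reboot before the change takes effect.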

 

The result is much better than before. Only a very few latency spikes. (I will test this more tonight.)

 

 

The above is at least what I THINK I'm accomplishing with the settings mentioned. ???

Link to comment


It's still not right. You have to pin CPU pairs, not just the one you think is the main core. The architecture is one core with two threads, not a core plus a separate thread.

 

Do this:

 

isolate 2-5,8-11

 

This isolates both pairs of each core for your VM.

 

assign cpus 2,3,4,5,8,9,10,11 to your Win 10 VM.

 

Then emulatorpin 0,1,6,7

 

assign cpus 0,1,6,7 to ubuntu.

 

Edit the VM and set your CPUs. Then edit the XML and add the emulatorpin to the Windows VM. Don't make any other changes to the XML.
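
Putting that together, roughly what the Win10 VM should end up with (just a sketch; the vcpupin lines are whatever the GUI writes for those cpus, the emulatorpin is the one line added by hand):

append isolcpus=2-5,8-11

  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
    <emulatorpin cpuset='0,1,6,7'/>
  </cputune>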

Link to comment

Thanks dlandon!

 

I will try this later tonight.

 

(And I'll save my current XML as well. I need some custom coding to get my usb controller passed through.)

 

And I'm also running the Plex Media Server and SABnzbd dockers, so I guess I'll keep Ubuntu on one core (two threads) to be sure.

Link to comment

Sorry to interrupt you guys, but I've got a question in case you tested it.

 

@dlandon: how much CPU usage does the emulator require (the emulatorpin cores)? Would it be fine if I shared the same cores for the Plex transcoder and for the emulatorpin of my VMs?

 

Yes. But only use emulatorpin on latency-sensitive VMs. Normally the CPUs assigned to the VM are also used for emulator tasks. It's only necessary when a VM suffers from latency issues (pausing, stuttering, etc.) when serving media or gaming.

 

Don't get carried away with assigning cpus.  You don't normally need to do any special cpu assigning or isolating cpus.  I also notice a lot of people assigning more than 4 cpus to VMs.  I see no reason that any VM needs that kind of horsepower.  I've also read that assigning more than 4 can also cause problems.

 

If you are seeing latency issues, don't think that assigning more cpus will solve the problem.  The issue is more with assignment of the cpus.

Link to comment


I'm not sure why you need so many cpus for a Win 10 VM.  Four total cpus should be more than enough.

Link to comment


Thanks a lot for the reply!

Link to comment
