Preclear plugin



5 hours ago, Fuggin said:

Is there a way to see it?

Hey @Fuggin. I can't say why you're not seeing it (I don't know enough about it to debug that), but on mine there is also a Preclear section under the Tools menu, and it gives access to the logs and so on. Depending on how your Preclear is installed, you may have something under there too. I happen to be preclearing a couple of disks myself, so screenshots are attached to help.

 

 

Screen Shot 2021-08-29 at 08.15.40.png

Screen Shot 2021-08-29 at 08.15.52.png


 

33 minutes ago, trurl said:

Did you look at the preclear log?

 

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdw

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Preclear Disk Version: 1.0.22

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: S.M.A.R.T. info type: default

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: S.M.A.R.T. attrs type: default

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Disk size: 8001563222016

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Disk blocks: 1953506646

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Blocks (512 bytes): 15628053168
Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Block size: 4096

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Start sector: 0
Aug 31 14:11:59 preclear_disk_ZA12PD7W_13451: Pre-read: pre-read verification started (1/5)....

Aug 31 14:11:59 preclear_disk_ZA12PD7W_13451: Pre-Read: dd if=/dev/sdw of=/dev/null bs=2097152 skip=0 count=8001563222016 conv=noerror iflag=nocache,count_bytes,skip_bytes

 

This is the log from the restart, but I guess I didn't look at the one from the failed run. The report showed SMART passed and just said the zeroing failed. I guess I'll have to see what the second attempt does. Sorry for not providing the correct information.
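(For reference, the Pre-Read line in that log is essentially just a full sequential read of the whole device with dd, discarding the data and only checking that the read completes without errors. A stripped-down sketch of the same idea, assuming bash and GNU dd, not the plugin's exact code:)

#!/bin/bash
# Rough sketch of a pre-read style check: read the whole device once,
# discard the data, and report whether dd finished without I/O errors.
# DEVICE is a placeholder; substitute the disk you are testing.
DEVICE=/dev/sdX

# Size in bytes, so count_bytes can be used the same way the log shows.
SIZE=$(blockdev --getsize64 "$DEVICE")

# conv=noerror keeps dd reading past unreadable sectors;
# iflag=nocache avoids filling the page cache with throwaway data.
if dd if="$DEVICE" of=/dev/null bs=2M count="$SIZE" \
      conv=noerror iflag=nocache,count_bytes status=progress; then
    echo "pre-read completed cleanly"
else
    echo "pre-read failed, dd exit code $?"
fi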

  • 2 weeks later...

Hi there... Anyone else having this banner pop up? - "Preclear Plugin (2021.04.11): unsupported Unraid version (6.9.2). Please upgrade your OS/plugin or request proper support"

[screenshot of the warning banner]

 

I have checked the plugin and it is up to date:

[screenshot of the Plugins page showing Preclear up to date]

 

@gfjardim just wondering if this is a known issue?

Regards


I ran three rounds of preclear on a brand-new WD Red and got the following error on the third run (full log attached). It says SMART error, but I don't see an error below.

 

Log shows earlier events:

 

Quote

Sep 22 19:47:16 preclear_disk_WD-WX52D31RYVY4_3942: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1372: wait: pid 28787 is not a child of this shell
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: dd - read 4000787030016 of 4000787030016 (0).
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: elapsed time - 7:24:38
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: dd command failed, exit code [127].

 

Not exactly descriptive, but it looks more like a preclear script error than a drive error? Or is the drive DOA?
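For what it's worth, the exit code fits the "wait: pid 28787 is not a child of this shell" line just above it: in bash, the wait builtin returns 127 when the PID it is given is not a child of the current shell, so the script may simply be reporting wait's status rather than a real read failure from dd. A quick illustration (the PID is made up, just to show the behavior):

$ wait 99999; echo "exit code: $?"
bash: wait: pid 99999 is not a child of this shell
exit code: 127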

 

(The forum doesn't work properly with Ghostery, and the code-snippet widget put it all on one line.)

 

Quote

Sep 22 19:47:19 preclear_disk_WD-WX52D31RYVY4_3942: Pre-read: pre-read verification failed!
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Error:
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.:
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Power_On_Hours           1        54   Up 53
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Temperature_Celsius      37       39   Up 2
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Reallocated_Event_Count  0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: error encountered, exiting...

preclear_disk_WD-WX52D31RYVY4_3942.txt

2 hours ago, Paul Keating said:

Hi there... Anyone else having this banner pop up? - "Preclear Plugin (2021.04.11): unsupported Unraid version (6.9.2). Please upgrade your OS/plugin or request proper support"

[screenshot of the warning banner]

 

I have checked the plugin and it is up to date:

[screenshot of the Plugins page showing Preclear up to date]

 

@gfjardim just wondering if this is a known issue?

Regards

I'm also on 6.9.2 with the same Preclear version, and I'm not seeing that warning.

On 9/23/2021 at 11:58 AM, tjb_altf4 said:

I'm also on 6.9.2 with the same Preclear version, and I'm not seeing that warning.

Thanks for that... certainly odd behavior and a first for me. I did have a loss of network access and had to reboot after 3 months of happy running. I will monitor and see if it happens again.

On 9/24/2021 at 1:46 PM, yoban said:

Thanks for that... certainly odd behavior and a first for me. I did have a loss of network access and had to reboot after 3 months of happy running. I will monitor and see if it happens again.

Well... first reboot in 2 months, and I now see the incompatibility warning banner!

 

The funny thing is I've used 6.9.2 and this version of Preclear a few times in the last few months and it's worked just fine.

I think this might be something driven from CA that @gfjardim needs to make a (hopefully) small update for.


I am getting an error during the post-read verification, for the second time now. Does anyone know what could cause that? Do you think the disk is still good to use anyway?

 

############################################################################################################################
#                                                                                                                          #
#                                        unRAID Server Preclear of disk ZCT0DTK5                                           #
#                                       Cycle 1 of 1, partition start on sector 64.                                        #
#                                                                                                                          #
#                                                                                                                          #
#   Step 1 of 5 - Pre-read verification:                                                  [16:18:25 @ 136 MB/s] SUCCESS    #
#   Step 2 of 5 - Zeroing the disk:                                                       [16:59:16 @ 130 MB/s] SUCCESS    #
#   Step 3 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS    #
#   Step 4 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS    #
#   Step 5 of 5 - Post-Read verification:                                                                          FAIL    #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#                              Cycle elapsed time: 34:02:11 | Total elapsed time: 34:02:11                                 #
############################################################################################################################


############################################################################################################################
#                                                                                                                          #
#                                        S.M.A.R.T. Status (device type: default)                                          #
#                                                                                                                          #
#                                                                                                                          #
#   ATTRIBUTE                INITIAL  STATUS                                                                               #
#   Reallocated_Sector_Ct    0        -                                                                                    #
#   Power_On_Hours           207      -                                                                                    #
#   Runtime_Bad_Block        0        -                                                                                    #
#   End-to-End_Error         0        -                                                                                    #
#   Reported_Uncorrect       0        -                                                                                    #
#   Airflow_Temperature_Cel  28       -                                                                                    #
#   Current_Pending_Sector   0        -                                                                                    #
#   Offline_Uncorrectable    0        -                                                                                    #
#   UDMA_CRC_Error_Count     0        -                                                                                    #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#   SMART overall-health self-assessment test result: PASSED                                                               #
############################################################################################################################

 

 

 

 

 

Oct  3 18:36:46 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: progress - 90% zeroed @ 106 MB/s
Oct  3 21:11:11 TheArk rc.diskinfo[8155]: SIGHUP received, forcing refresh of disks info.
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: dd - wrote 8001563222016 of 8001563222016 (0).
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: elapsed time - 16:59:13
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: dd exit code - 0
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: zeroing the disk completed!
Oct  3 21:11:14 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: writing signature:    0   0   2   0   0 255 255 255   1   0   0   0 255 255 255 255
Oct  3 21:11:14 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: verifying unRAID's signature on the MBR ...
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: Unraid preclear signature is valid!
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: post-read verification started (1/5)....
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: verifying the beginning of the disk.
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp /tmp/.preclear/sdg/fifo /dev/zero
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd if=/dev/sdg of=/tmp/.preclear/sdg/fifo count=2096640 skip=512 iflag=nocache,count_bytes,skip_bytes
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: verifying the rest of the disk.
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp /tmp/.preclear/sdg/fifo /dev/zero
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd if=/dev/sdg of=/tmp/.preclear/sdg/fifo bs=2097152 skip=2097152 count=8001561124864 iflag=nocache,count_bytes,skip_bytes
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp command failed - disk not zeroed
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd - read 480839204864 of 8001563222016 (7520724017152).
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: elapsed time - 0:44:24
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd command failed, exit code [141].
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 459993513984 bytes (460 GB, 428 GiB) copied, 2525.4 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 220437+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 220436+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 462287798272 bytes (462 GB, 431 GiB) copied, 2537.64 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 221426+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 221425+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 464361881600 bytes (464 GB, 432 GiB) copied, 2549.79 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 222396+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 222395+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 466396119040 bytes (466 GB, 434 GiB) copied, 2561.99 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 223367+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 223366+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 468432453632 bytes (468 GB, 436 GiB) copied, 2574.1 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 224428+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 224427+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 470657531904 bytes (471 GB, 438 GiB) copied, 2587.39 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 225417+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 225416+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 472731615232 bytes (473 GB, 440 GiB) copied, 2599.69 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 226373+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 226372+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 474736492544 bytes (475 GB, 442 GiB) copied, 2611.91 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 227350+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 227349+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 476785410048 bytes (477 GB, 444 GiB) copied, 2624.21 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 228315+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 228314+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 478809161728 bytes (479 GB, 446 GiB) copied, 2636.37 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 229282+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 229281+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 480837107712 bytes (481 GB, 448 GiB) copied, 2648.52 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: post-read verification failed!
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Error:
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.:
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Power_On_Hours           207      241  Up 34
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Runtime_Bad_Block        0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: End-to-End_Error         0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Reported_Uncorrect       0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Airflow_Temperature_Cel  28       32   Up 4
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Oct  3 21:55:56 TheArk preclear_disk_ZCT0DTK5[26100]: error encountered, exiting...
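(In case it helps with interpreting the log: the post-read step appears to stream the disk back through a FIFO with dd and let cmp check every byte against /dev/zero. The "cmp command failed - disk not zeroed" line means cmp stopped on data it didn't expect about 480 GB in, and dd's exit code 141 is just the SIGPIPE (128 + 13) it receives once cmp gives up and closes the FIFO. A rough, simplified sketch of that mechanism, assuming bash and GNU coreutils/diffutils, not the plugin's actual code:)

#!/bin/bash
# Simplified sketch of the post-read mechanism seen in the log:
# dd streams the device into a FIFO while cmp compares it to /dev/zero.
# DEVICE is a placeholder; this is not the plugin's actual script.
DEVICE=/dev/sdX
SIZE=$(blockdev --getsize64 "$DEVICE")

FIFO=$(mktemp -u)
mkfifo "$FIFO"

# Compare exactly SIZE bytes against the zero stream. If cmp hits a
# non-zero byte it exits early and closes the FIFO, at which point dd
# dies with SIGPIPE (exit code 141 = 128 + 13).
cmp -n "$SIZE" "$FIFO" /dev/zero &
CMP_PID=$!

dd if="$DEVICE" of="$FIFO" bs=2M iflag=nocache status=none
DD_RC=$?

wait "$CMP_PID"; CMP_RC=$?
rm -f "$FIFO"

if [ "$DD_RC" -eq 0 ] && [ "$CMP_RC" -eq 0 ]; then
    echo "disk reads back as all zeros"
else
    echo "post-read verification failed (dd=$DD_RC, cmp=$CMP_RC)"
fi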

 

  • 3 weeks later...
On 7/21/2019 at 5:05 PM, Forusim said:

Hello @gfjardim

 

When a drive is mounted as unassigned (not even shared), your plugin issues an "lsof -- /mnt/disks/tempdrive" command every few seconds.

This causes noticeable CPU spikes (30% out of 400%) via the "php" process and doesn't let that drive ever spin down.

 

Would it be possible not to issue this command when no preclear activity takes place?

As a workaround I have to uninstall the plugin when it is not in use.

 

This enhancement would be much appreciated.

 

@gfjardim

Hey, this is happening to me now as well [2 years later]: Unraid 6.9.3, plugin version 2021.04.11 [up to date].

 

I noticed 'lsof' was pegging an entire CPU core at 100%; I investigated, and it's coming from /etc/rc.d/rc.diskinfo.

 

From /var/log/diskinfo.log, it looks like this benchmark is being run continuously in a loop, with no delay/sleep between iterations:

Mon Oct 18 22:17:49 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.509439s.
Mon Oct 18 22:18:03 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.332904s.
Mon Oct 18 22:18:47 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.167924s.
Mon Oct 18 22:19:00 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.680031s.
Mon Oct 18 22:19:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.882862s.
Mon Oct 18 22:19:58 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.628200s.
Mon Oct 18 22:20:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.887803s.
Mon Oct 18 22:20:57 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.041714s.

 

 

Those drives are mounted by Unassigned Devices. I'm not sure what this benchmark was trying to accomplish, as I don't have any preclears running, and haven't since a reboot.
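For context, each of those benchmarked calls is just counting how many open files there are under the mount point: lsof lists them, tail -n +2 drops the header row, and wc -l counts what is left.

# What rc.diskinfo appears to be timing: count open files under a mount.
# The mount point here is taken from the log above.
lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l

Since lsof effectively has to walk every process's open file descriptors to answer that, each call is expensive (12-14 seconds here), and running it back to back for every unassigned disk will keep a core busy more or less permanently.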

  • 2 weeks later...
On 3/10/2021 at 12:03 PM, stepmback said:

That was annoying.

 

I had not rebooted my server in about 160 days. Everything was fine, so why bother? I just rebooted the server and now I am able to pre-clear the drive. Bug?

Just wanted to say thanks; I don't know how long I would have futzed around before I tried a reboot, lol.

  • 2 weeks later...

I built a new server for myself and am having an odd issue with the preclear plugin, specifically when using the gfjardim script. When I kick off a preclear on this new machine, I get the message below in the preview window.

 

/usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 514: 0 * 100 /         0 : division by 0 (error token is "0 ")

 

The main Unraid GUI shows that the preclear is finished, but the machine is definitely still doing something, as there is a dd process using CPU.

 

If I start the process using the Joe L. script instead, it starts as expected and works.

 

I am running 6.9.2, the newest version of the plugin, and all that jazz.

I do not particularly care which script I use when running preclear, but I figured I would mention it here in this thread to see if anyone has seen anything similar.

 

The new build is a Ryzen 7 5700G, 16GB of RAM on an MSI B550i Gaming Edge Max motherboard.

I have a much older test system running something like an Intel Q6600 with 12GB of RAM on a Supermicro motherboard, and the gfjardim script seems to work perfectly fine on that one.
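If I'm reading that division-by-zero message right, it looks like a progress-percentage calculation where the total (presumably the disk size the script detected) ended up as 0 on this machine, though that is just a guess on my part. A hypothetical snippet that produces the same shape of error in bash:

# Hypothetical reproduction, not the plugin's code: with both values 0,
# bash integer arithmetic aborts with a "division by 0" error just like
# the message shown in the preview window.
read_bytes=0; total_bytes=0
echo $(( $read_bytes * 100 / $total_bytes ))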

12 minutes ago, darkside40 said:

Since then, nothing has happened in his GitHub repo either, so I am a little bit worried.

Preclear is not necessary any longer, but if you feel you absolutely have to have it, there is a preclear Docker container.


I'm seeing 429 MB/s in the pre-read stage of preclear on my WDC Blue 4TB hard drive (info is in the image below). This drive is a shingled (SMR) variety. This isn't read cache; it has pretty much stayed at this speed the whole time. Is that why the read speeds are so high? I'll post a follow-up on my write speeds. Is this normal? I've only seen SSDs with read speeds like this!

1763704856_2021-11-2613_30_15-Tower_PreclearMozillaFirefox.thumb.png.f762a4ce3d7e885a4d811b81db11394e.png
