Solr associations

2010-04-22 Thread Fornoville, Tom
Hi all,

I've had some mail server problems, and it seems this mail didn't reach
the mailing list.
If it did, my apologies for the double post.


For the last couple of days we have been thinking of using Solr as our search
engine of choice. Most of the features we need are available out of the box or
can easily be configured. There is, however, one feature that we absolutely
need which seems to be well hidden (or missing) in Solr.

I'll try to explain with an example. We have lots of documents that are
actually businesses:


<business>
  <name>Apache</name>
  <id>1</id>
  ...
</business>
<business>
  <name>McDonalds</name>
  <id>2</id>
  ...
</business>



In addition we have another xml file with all the categories and
synonyms:


<category>
  <name>software</name>
  <synonym>IT</synonym>
</category>
<category>
  <name>fast food</name>
  <synonym>restaurant</synonym>
</category>



We want to associate businesses with categories so we can search
using the name and/or synonyms of the category. But we do not want to
merge these files at indexing time, because we want to be able to update the
categories (adding/removing synonyms, ...) without reindexing all the
businesses.

Is there anything in Solr that does this kind of associations or do we
need to develop some specific pieces?

All feedback and suggestions are welcome.

Thanks in advance,

Tom



Re: Re: Combining Dismax and payload boosting

2010-04-22 Thread amit mor
Hi Erik,

I am answering on behalf of Victoria, my team leader.

We needed (note the past tense!) to use the nice query parsing machinery
of dismax as well as its disjunction scoring. We also wanted to flavor
each sub-query dismax generates with payload values (for boosting) that
were encoded at index time. 

For that purpose we basically extended QueryParser with a new subclass
that overrides the 'newTermQuery' method to return a PayloadTermQuery
object instead of a TermQuery object.
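The override Amit describes can be sketched like this, assuming the Lucene 2.9-era payload API (PayloadTermQuery with AveragePayloadFunction); this is an illustration of the approach, not the team's actual code:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.payloads.AveragePayloadFunction;
import org.apache.lucene.search.payloads.PayloadTermQuery;
import org.apache.lucene.util.Version;

// QueryParser subclass that swaps TermQuery for PayloadTermQuery,
// so payloads encoded at index time contribute to scoring.
public class PayloadQueryParser extends QueryParser {
    public PayloadQueryParser(Version version, String field, Analyzer analyzer) {
        super(version, field, analyzer);
    }

    @Override
    protected Query newTermQuery(Term term) {
        // AveragePayloadFunction averages payload scores across term matches
        return new PayloadTermQuery(term, new AveragePayloadFunction());
    }
}
```

The payload scoring itself still needs a matching Similarity whose scorePayload() decodes the boost, as in the Lucid blog post cited below.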

As to your question, the whole purpose of this is to do our version of
'weighted synonyms'. Think about it: Bill is still William, but
more/less common, so getting a document with a guy named William when
you searched for Bill is still good, but not worth the same score.

I would be more than willing to contribute our code to the Solr source, if it
is of any interest, and of course provided there are no legal issues with
management.

Thanks,
Amit

> Victoria,
> 
> An example of specifically what types of queries you'd like to do  
> would be helpful.
> 
> Using nested queries you can leverage dismax and your custom query  
> parser together, which may be what you're looking for.  See this  
> article for details on nested queries: 
>  
> 
> Also, I'm curious about your query parser that uses payloads.   How  
> does it work?   There's a PayloadTermQuery parser attached to the  
> following issue, and I'm wondering how your work might align with that  
> implementation: 
> 
>   Erik

> > Hi,
> > We are using payloads for score boosting. For this purpose we've
> > implemented custom boosting QueryParser and similarity function. We
> > followed
> > http://www.lucidimagination.com/blog/2009/08/05/getting-started-with-payloads/.
> > 
> > On the other hand, we'd like to use dismax query handling because of
> > its benefits when searching across several fields.
> > How can we make dismax use our custom QueryParser?
> > 
> > Thanks!
> > 
> 
> 




Re: Best Open Source

2010-04-22 Thread findbestopensource
Thank you Dave and Michael for your feedback.

We are currently in beta and will fix these issues soon.

Regards
Aditya
www.findbestopensource.com



On Tue, Apr 20, 2010 at 3:01 PM, Michael Kuhlmann <
michael.kuhlm...@zalando.de> wrote:

> Nice site. Really!
>
> In addition to Dave:
> How do I search with tags enabled?
> If I search for "Blog", I can see that there's one blog software written
> in Java. When I click on the Java tag, my search is discarded and
> I get all Java software. When I do my search again, the tag filter is
> lost. It seems to be impossible to combine tag filters with search.
>
> -Michael
>
> Am 20.04.2010 11:00, schrieb solai ganesh:
> > Hello all,
> >
> > We have launched a new site hosting the best open source products and
> > libraries across all categories. This site is powered by Solr search. There
> > are many open source products in every category, and it is sometimes
> > difficult to identify which is the best. We identify the best. As open
> > source users, you are probably using many open source products and
> > libraries; it would be great if you helped us identify the best.
> >
> > http://www.findbestopensource.com/
> >
> > Regards
> > Aditya
> >
>
>


dismax request Handler with OR operator

2010-04-22 Thread Ranveer Kumar
Hi,

Recently I changed my search handler from standard to dismax, but I am having
a problem getting results with the OR operator; I only get AND-operator
results. I think I am missing some configuration somewhere.
My configuration follows:

schema.xml : ---


solrconfig.xml:---
 
 
<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">title^5.0 text^0.5</str>
  </lst>
</requestHandler>


solrj :
query.setQueryType("dismax");
query.setQuery(qq);
//query.set("qf", "title^5.0+text^0.3");
query.setParam("qf", "title^20.0 + classification_name^5.0 + text^0.3");

please help..

thanks


Re: dismax request Handler with OR operator

2010-04-22 Thread Ahmet Arslan

> Recently I change my search handler to dismax from
> standard. But I am facing
> problem to get result by OR operator. I am getting AND
> operator result only.
> I think somewhere I am missing configuration.
> following is my configuration :
> 
> schema.xml : ---
> 


Dismax does not use the default operator. You need to set the mm[1] parameter,
which has a default of 100% (all clauses must match).

[1]http://wiki.apache.org/solr/DisMaxRequestHandler#mm_.28Minimum_.27Should.27_Match.29
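For example, lowering mm to 1 makes dismax behave like a pure OR; the values here are illustrative:

```text
In solrconfig.xml, inside the dismax handler's defaults:
  <str name="mm">1</str>
or per request:
  q=apache+solr&qt=dismax&mm=1
```

mm also accepts percentages and stepped expressions such as "2<-25%", as described on the wiki page above.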


  


Re: Minimum Should Match the other way round

2010-04-22 Thread MitchK

Hi Hoss,

thank you for joining the discussion :).

2) If I understood the API documentation right, the behaviour of the
FieldQParser depends on exactly what I've defined in my analyzer.

3) This seems to be a very good solution. I don't come from the Lucene
corner and I started developing with Solr, so maybe some of my thoughts are
wrong. But as I understand it, you suggest the following:

Subclass e.g. LuceneQParser -> let's call the class LuceneLengthQParser

In the LuceneLengthQParser I instantiate a FieldQParser. This parser uses
the filters I have defined on my titleLength field to retrieve the query
length.
Then I can pass the query length to a setMaxLength(int queryLength, int
increment) method.
The return value can be appended to the query string. Afterwards the original
LuceneQParser can do its job.

To make sure that people can extend every parser they like, I would separate
the "rule" part of my custom parser, so that they can easily extend other
parsers by including the length-retrieval part.
 
I think, if I understood everything right, I will subscribe to the developer
list and open an issue for that.

--
However, I have two more comprehension questions:
what does WDF mean, and what does HTE stand for?

Thank you very much!

Kind regards
- Mitch
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/Minimum-Should-Match-the-other-way-round-tp694867p742797.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr associations

2010-04-22 Thread MitchK

Hi Tom,

why not define three fields in your schema.xml?

One field is the name field (e.g. Apache or McDonald's), another field is
the category name, and the last field contains the category id.

If you define a SynonymFilter for query-time filtering, you don't need to
reindex the whole index. Unfortunately you need to restart your
Solr instance (or was it only the core? I don't know...) to make recently
added synonyms available at search time.

If you set expand = true, a query like "restaurant" could also match "fast
food".

The synonyms.txt for this use case looks like:
restaurant => fast food, restaurant
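In schema.xml, query-time synonym filtering of this sort goes on the field type's query analyzer; a minimal sketch using the standard Solr factories (the field type name is illustrative):

```xml
<fieldType name="text_cat" class="solr.TextField">
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- expand="true" maps a query term to all of its synonyms -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```

Because the filter sits only on the query analyzer, editing synonyms.txt never requires reindexing, only a reload as discussed above.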

Hope this helps
- Mitch
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-associations-tp742443p742837.html
Sent from the Solr - User mailing list archive at Nabble.com.


Facet query problem

2010-04-22 Thread ZAROGKIKAS,GIORGOS
Hi there,

I'm a new Solr user with some problems with facets.

I have indexed a field with values like "9", and the indexed values range
between 1 and 35.

When I use fq=A00053:[16 TO 30] I get results between 1 and 35. It looks
like it ignores the second digit of each value in my range.

How can I solve that?

Thanks in advance


Re: Facet query problem

2010-04-22 Thread Erik Hatcher
I'm taking an educated guess that this field is a "string" field.  In  
that case, range queries are lexicographical (1, 10, 2, 3, 4...).  Use  
a numeric field type to have range queries work properly.  See Solr's  
example schema.xml for details on these types and range queries.
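The effect is easy to reproduce with plain string comparison, which is what a range query on a string field boils down to:

```java
import java.util.ArrayList;
import java.util.List;

// Shows why fq=A00053:[16 TO 30] on a *string* field matches values
// such as 2 and 3: the comparison is lexicographic, not numeric.
public class LexRangeDemo {
    // Collect the integers in [min, max] whose *string* form falls
    // between lower and upper lexicographically.
    public static List<Integer> matches(int min, int max, String lower, String upper) {
        List<Integer> hits = new ArrayList<Integer>();
        for (int i = min; i <= max; i++) {
            String s = Integer.toString(i);
            if (s.compareTo(lower) >= 0 && s.compareTo(upper) <= 0) {
                hits.add(i);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // 2 and 3 sneak into [16 TO 30] because "2" > "16" and "3" <= "30"
        System.out.println(matches(1, 35, "16", "30"));
    }
}
```

With a numeric field type (e.g. "tint"/TrieIntField in the example schema) the index stores values in numeric order and the same filter behaves as expected.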


Erik



On Apr 22, 2010, at 10:00 AM, ZAROGKIKAS,GIORGOS wrote:




Re: Retrieve time of last optimize

2010-04-22 Thread Shawn Heisey

On 4/21/2010 1:24 PM, Shawn Heisey wrote:
Is it possible to issue some kind of query to a Solr core that will 
return the last time the index was optimized?  Every day, one of my 
shards should get optimized, so I would like my monitoring system to 
tell me when the newest optimize date is more than 24 hours ago.  I 
could not find a way to get this.  The /admin/cores page has a lot of 
other useful information, but not that particular piece.


I have found some other useful information on the stats.jsp page, like 
the number of segments, the size of the index on disk, and so on.  Still 
have not been able to locate the last optimize date, which would simply 
be the timestamp on the earliest disk segment.


Thanks,
Shawn



Re: Kinda-sorta realtime?

2010-04-22 Thread Jason Rutherglen
> the merge settings (and maybe the MergeScheduler) to ensure that your
> pathological worst-case scenario (i.e. a really big merge) doesn't block
> your commits.

ConcurrentMergeScheduler should be handling the thread priorities more
intelligently in Lucene 3.1.
https://issues.apache.org/jira/browse/LUCENE-2164

On Wed, Apr 21, 2010 at 3:55 PM, Chris Hostetter
 wrote:
>
> : We don't mind having an occasional long delay between committing data and
> : being able to find that data, as long as the average is somewhere south of a
> : second or so, and Lucene's NRT looks like it will provide that level of
> : 'realtimeness'.
>
> an average 500ms "lag until visible" is totally feasible with Solr 1.4 --
> provided you aren't using replication (or are using super fast
> replication) and provided you don't need a lot of cache warming (which
> requires a trade off in terms of how fast the searches themselves are)
>
> given all that, the one thing you might need to worry about is tweaking
> the merge settings (and maybe the MergeScheduler) to ensure that your
> pathological worst-case scenario (i.e. a really big merge) doesn't block
> your commits.
>
>
>
>
> -Hoss
>
>


Lucandra - Lucene/Solr on Cassandra: April 26, NYC

2010-04-22 Thread Otis Gospodnetic
Hello folks,

Those of you in or near NYC and using Lucene or Solr should come to "Lucandra - 
a Cassandra-based backend for Lucene and Solr" on April 26th:

http://www.meetup.com/NYC-Search-and-Discovery/calendar/12979971/

The presenter will be Lucandra's author, Jake Luciani.

Please spread the word.

Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/


Re: I have a big problem with pagination using apache solr and haystack

2010-04-22 Thread Ariel
I supposed Haystack does that for me. What I need to know is where Solr
saves its index, so I can check whether documents are duplicated in it. I
am working on Ubuntu; could somebody tell me where it is? This is a basic
question, but I am a newbie with Solr.
I hope you can help me.
Regards
Ariel



On Tue, Apr 20, 2010 at 3:03 PM, Israel Ekpo  wrote:

> I hear this sort of complaint frequently.
>
> Make sure you did not forget to send a commit request after deleting any
> documents.
>
> Until the commit request is made, those deletes are not finalized and the
> removed documents will still show up.
>
> On Tue, Apr 20, 2010 at 2:37 PM, MitchK  wrote:
>
> >
> > Hi Isaac,
> >
> > how did you implement pagination in Solr? What did you do there?
> > Did you ever have a look at your index with q=*:*?
> > Maybe you've forgotten to delete some news while testing your application,
> > and so there are some duplicates.
> >
> > Another thing is: If you have got only 20 news and Solr seems to have 40
> > you
> > should be able to find those which are doubled. If not - don't change
> > anything, try to find a corporation with a lot of money and declare "I've
> > got an application which writes its own news - artificial intelligence?
> > Here
> > you are!" :).
> >
> > Hope this helps
> > - Mitch
> > --
> > View this message in context:
> >
> http://n3.nabble.com/I-have-a-big-problem-with-pagination-using-apache-solr-and-haystack-tp732572p733115.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
> >
>
>
>
> --
> "Good Enough" is not good enough.
> To give anything less than your best is to sacrifice the gift.
> Quality First. Measure Twice. Cut Once.
> http://www.israelekpo.com/
>


Storing Solr index in Cassandra

2010-04-22 Thread Andy
Lucandra stores the Solr index in Cassandra. What is the advantage of that
compared to regular Solr?

Does anyone have experience with Lucandra + Solr that they can share?


  


Re: LucidWorks Solr

2010-04-22 Thread Robert Muir
On Wed, Apr 21, 2010 at 1:38 PM, Shashi Kant  wrote:

> Why do these approaches have to be mutually exclusive?
> Do a dictionary lookup, if no satisfactory match found use an
> algorithmic stemmer. Would probably save a few CPU cycles by
> algorithmic stemming iff necessary.
>
>
by the way, if you want to do this, you can do it easily in Solr trunk. Just
put a StemmerOverrideFilterFactory in front of your stemmer, containing
tab-separated dictionary-word stem mappings. In the test-files directory is
an example of this (stemdict.txt):

# test that we can override the stemming algorithm with our own mappings
# these must be tab-separated
monkeys	monkey
otters	otter
# some crazy ones that a stemmer would never do
dogs	cat

You can use this factory, or the new KeywordMarkerFilterFactory, which is
similar but simply takes a text file like protwords.txt listing words for the
stemmer to ignore.
Both of these filters set a special attribute on the token in the token
stream that all stemmers respect, so they won't do any stemming on such
tokens.
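In schema.xml this looks roughly like the following (the factory names are the real trunk classes; the dictionary file name and choice of Porter stemmer are examples):

```xml
<analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <!-- dictionary wins: entries are tab-separated word/stem pairs -->
  <filter class="solr.StemmerOverrideFilterFactory" dictionary="stemdict.txt"/>
  <!-- the algorithmic stemmer only sees tokens the dictionary skipped -->
  <filter class="solr.PorterStemFilterFactory"/>
</analyzer>
```

This gives exactly the dictionary-first, algorithmic-fallback behavior Shashi asked about.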

-- 
Robert Muir
rcm...@gmail.com


Re: Getting the character offset from highlighted fragments

2010-04-22 Thread Simon Wistow
On Thu, Apr 22, 2010 at 02:15:08AM +0100, me said:
> It looks like org.apache.lucene.search.highlight.TextFragment has the 
> right information to do this (i.e textStartPos)

Turns out that it doesn't: textStartPos always seems to be 0 (and
textEndPos just seems to be the length of the fragment).

Any suggestions?




Boosting by rating using function queries?

2010-04-22 Thread Jason Rutherglen
I have an int ratings field that I want to boost from within the
query. So basically want to use the
http://wiki.apache.org/solr/FunctionQuery#scale function to
scale the ratings to values 1..5, then within the actual query
(or otherwise), boost the scaled rating value. How would I go
about doing this?
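One way to express this with dismax in Solr 1.4 is an additive boost function via the bf parameter; the query and field weights here are illustrative, and "rating" is the field from the question:

```text
q=pizza&qt=dismax&qf=name&bf=scale(rating,1,5)^2.0
```

scale(rating,1,5) first maps the raw rating range onto 1..5, and bf then adds the (weighted) function value to each document's score.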


DIH: inner select fails when outer entity is null/empty

2010-04-22 Thread Otis Gospodnetic
Hello,

Here is a newbie DataImportHandler question:

Currently, I have entities nested within entities.  There are some 
situations where a column value from the outer entity is null, and when I try 
to use it in the inner entity, the null just gets replaced with an 
empty string.  That in turn causes the SQL query in the inner entity to 
fail.

This seems like a common problem, but I couldn't find any solutions or mention 
in the FAQ ( http://wiki.apache.org/solr/DataImportHandlerFaq )

What is the best practice to avoid null values or to convert them to
something safer?
Would this be done via a Transformer, or is there a better mechanism for this?

I think the problem I'm describing is similar to what was described here:  
http://search-lucene.com/m/cjlhtFkG6m
... except I don't have the luxury of rewriting the SQL selects.
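A custom Transformer is the usual mechanism. The real class extends org.apache.solr.handler.dataimport.Transformer and overrides transformRow(Map<String,Object> row, Context ctx); the core logic, sketched standalone here with an assumed sentinel value, is just:

```java
import java.util.Map;

// Standalone sketch of the null-replacement a DIH Transformer would do.
// "-1" is an assumed sentinel that the inner entity's SQL can safely
// handle; pick whatever value is safe for your own schema.
public class NullSafeRow {
    public static Map<String, Object> transformRow(Map<String, Object> row) {
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (e.getValue() == null) {
                e.setValue("-1");
            }
        }
        return row;
    }
}
```

Registered on the outer entity (transformer="my.NullSafeTransformer"), this runs before the inner entity's query is built, so the placeholder never expands to an empty string.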

Thanks,
Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/



Best way to prevent this search lockup (apparently caused during big segment merges)?

2010-04-22 Thread Chris Harris
I'm running Solr 1.4+ under Tomcat 6, with indexing and searching
requests simultaneously hitting the same Solr machine. Sometimes Solr,
Tomcat, and my (C#) indexing process conspire to render search
inoperable. So far I've only noticed this while big segment merges
(i.e. merges that take multiple minutes) are taking place.

Let me explain the situation as best as I understand it.

My indexer has a main loop that looks roughly like this:

  while true:
try:
  submit a new add or delete request to Solr via HTTP
catch timeoutException:
  sleep a few seconds

When things are going wrong (i.e., when a large segment merge is
happening), this loop is problematic:

* When the indexer's request hits Solr, then the corresponding thread
in Tomcat blocks. (It looks to me like the thread is destined to block
until the entire merge is complete. I'll paste in what the Java stack
traces look like at the end of the message if they can help diagnose
things.)
* Because the Solr thread stays blocked for so long, eventually the
indexer hits a timeoutException. (That is, it gives up on Solr.)
* Hitting the timeout exception doesn't cause the corresponding Tomcat
thread to die or unblock. Therefore, each time through the loop,
another Solr-handling thread inside Tomcat enters a blocked state.
* Eventually so many threads (maxThreads, whose Tomcat default is 200)
are blocked that Tomcat starts rejecting all new Solr HTTP requests --
including those coming in from the web tier.
* Users are unable to search. The problem might self-correct once the
merge is complete, but that could be quite a while.

What are my options for changing Solr settings or changing my indexing
process to avoid this lockup scenario? Do you agree that the segment
merge is helping cause the lockup? Do adds and deletes really need to
block on segment merges?

Partial thread dumps follow, showing example add and delete threads
that are blocked. Also the active Lucene Merge Thread, and the thread
that kicked off the merge.

[doc deletion thread, waiting for DirectUpdateHandler2.iwCommit.lock()
to return]
"http-1800-200" daemon prio=6 tid=0x0a58cc00 nid=0x1028
waiting on condition [0x0f9ae000..0x0f9afa90]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00016d801ae0> (a
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(Unknown Source)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
Source)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown
Source)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown
Source)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(Unknown
Source)
at 
org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:320)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:71)
at org.apache.solr.handler.XMLLoader.processDelete(XMLLoader.java:234)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:180)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
at java.lang.Thread.run(Unknown Source)

[doc adding thread, waiting for DirectUpdateHandler2.iwAccess.lock() to return]
"http-1800-70" daemon prio=

Re: Storing Solr index in Cassandra

2010-04-22 Thread Otis Gospodnetic
Andy,

have a look at:

  http://blog.sematext.com/2010/02/09/lucandra-a-cassandra-based-lucene-backend/

 
"One of the big differentiators of Cassandra is it does not rely on a global 
file system as Hbase and BigTable do.  Rather, Cassandra uses 
decentralized peer to peer “Gossip” which means two things:
1. It has no single point of failure, and
2. Adding nodes to the cluster is as simple as pointing it to any one 
live node. Cassandra also has built-in multi-master writes, replication, rack 
awareness, and can handle downed nodes gracefully."

I'll have the video of 
http://www.meetup.com/NYC-Search-and-Discovery/calendar/12979971/ up by the end 
of 2010, I promise! :)

Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/



- Original Message 
> From: Andy 
> To: solr-user@lucene.apache.org
> Sent: Thu, April 22, 2010 1:33:53 PM
> Subject: Storing Solr index in Cassandra
> 
> Lucandra stores Solr index in Cassandra. What is the advantage of that
> compared to regular Solr?
> 
> Anyone with experience with Lucandra + Solr they can share?


Re: Solr associations

2010-04-22 Thread Otis Gospodnetic
Restarting the core should be enough.

 Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/



- Original Message 
> From: MitchK 
> To: solr-user@lucene.apache.org
> Sent: Thu, April 22, 2010 9:19:46 AM
> Subject: Re: Solr associations
> 


Testing with Jetty, how to obtain the CoreContainer?

2010-04-22 Thread Jason Rutherglen
I'm trying to write a unit test that uses the embedded Jetty server,
and I need to obtain the CoreContainer that the SolrDispatchFilter
instantiates.  Is there a way to do this?


Re: Retrieve time of last optimize

2010-04-22 Thread Jon Baer
I don't think there is anything low-level in Lucene that will specifically
report something like lastOptimized() to you, since optimizes can be set up a
few ways.

Your best bet is probably adding a postOptimize hook and dumping the time to a
log / file / monitor / etc., with something like ...


<listener event="postOptimize" class="solr.RunExecutableListener">
  <str name="exe">lastOptimize.sh</str>
  <str name="dir">solr/bin</str>
  <bool name="wait">true</bool>
</listener>

 
Or writing to a file and reading it back into the admin if you need to display 
it there.

More @ http://wiki.apache.org/solr/SolrConfigXml#Update_Handler_Section

- Jon

On Apr 22, 2010, at 11:16 AM, Shawn Heisey wrote:

> On 4/21/2010 1:24 PM, Shawn Heisey wrote:
>> Is it possible to issue some kind of query to a Solr core that will return 
>> the last time the index was optimized?  Every day, one of my shards should 
>> get optimized, so I would like my monitoring system to tell me when the 
>> newest optimize date is more than 24 hours ago.  I could not find a way to 
>> get this.  The /admin/cores page has a lot of other useful information, but 
>> not that particular piece.
> 
> I have found some other useful information on the stats.jsp page, like the 
> number of segments, the size of the index on disk, and so on.  Still have not 
> been able to locate the last optimize date, which would simply be the 
> timestamp on the earliest disk segment.
> 
> Thanks,
> Shawn
>