Tomcat 6 HTTP Connector Threads all blocked

2009-03-01 Thread Jim Murphy

I have a 100-thread HTTP connector pool that for some reason ends up with
all its threads blocked here:

java.net.SocketOutputStream.socketWrite0(Native Method)
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
java.net.SocketOutputStream.write(SocketOutputStream.java:136)
org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:761)
org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:570)
org.apache.coyote.Response.doWrite(Response.java:560)
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:309)
org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:273)
org.apache.catalina.connector.Response.finishResponse(Response.java:486)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:287)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
java.lang.Thread.run(Thread.java:619)


Any hints on what to try to diagnose?

Regards

Jim



Re: Tomcat 6 HTTP Connector Threads all blocked

2009-03-01 Thread Jim Murphy

I should have said: Tomcat is hosting two webapps, a Solr 1.3 master and a
slave, as separate web apps.

Looking for anything to try.

Jim



Jim Murphy wrote:
> 
> I have a 100-thread HTTP connector pool that for some reason ends up with
> all its threads blocked here:
> 
> java.net.SocketOutputStream.socketWrite0(Native Method)
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
> java.net.SocketOutputStream.write(SocketOutputStream.java:136)
> org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
> org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
> org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
> org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:761)
> org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
> org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:570)
> org.apache.coyote.Response.doWrite(Response.java:560)
> org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
> org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
> org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:309)
> org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:273)
> org.apache.catalina.connector.Response.finishResponse(Response.java:486)
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:287)
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
> java.lang.Thread.run(Thread.java:619)
> 
> 
> Any hints on what to try to diagnose?
> 
> Regards
> 
> Jim
> 




Re: Tomcat 6 HTTP Connector Threads all blocked

2009-03-01 Thread Yonik Seeley
On Sun, Mar 1, 2009 at 10:32 AM, Jim Murphy  wrote:
> I should have said: Tomcat is hosting two webapps, a Solr 1.3 master and a
> slave, as separate web apps.

Given that the socket writes are blocked, it appears that whatever is
supposed to be reading the other endpoint isn't doing its job.

Are you using Java-based replication?  Do you know if these sockets
that are blocking are from client queries or from replication
requests?  Splitting up the master and slave into separate JVMs might
help shed some light on the situation.
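
One way to check which end is stuck, assuming a Linux host and the
connector on port 8080: run "netstat -tan | grep 8080" and look for
established connections whose Send-Q stays large and never drains --
those are the peers that have stopped reading.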

-Yonik
http://www.lucidimagination.com


Re: can the TermsComponent be used in combination with fq?

2009-03-01 Thread Chris Hostetter

: http://wiki.apache.org/solr/TermsComponent

: It would seem like this component would be useful for this.  However -
: we often require that some filtering be applied to search results
: based on which user is searching (e.g. public vs. private content).
: Is it possible to apply filtering here, or will we need to do
: something like running a q=*:*&fq=status:1 and then getting facets?

TermsComponent uses the raw Term Enumerators - it has no notion of 
documents, so it can't filter.  you'll have to use faceting for that.
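
for example, something along these lines (a sketch reusing the fq from
your own example -- the facet field name here is hypothetical):

/select?q=*:*&fq=status:1&rows=0&facet=true&facet.field=name&facet.prefix=tes&facet.limit=10

facet.prefix restricts the returned terms to those starting with the
typed prefix, and the fq keeps the counts scoped to documents the user
is allowed to see.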

: Note - also - the wiki page references a tutorial including this
: /autocomplete path, but I cannot find any trace of such.  I was able

the examples in the wiki actually refer to it as "/autoSuggest" -- which 
is what's in the example solrconfig.xml. 

(FWIW: i've updated the wiki to remove your note about adding 
"/autocomplete" to the configs since "/autoSuggest" is already there)




-Hoss



Re: 2 strange behaviours with DIH full-import.

2009-03-01 Thread Chris Hostetter

: 2.) I run a full-import and everything works fine... I run another
: full-import in the same core and everything seems to work fine. But I have
: noticed that the index in /data/index dir is two times bigger. I have seen
: that Solr uses this indexwriter constructor when it executes a deleteAll at
: the beginning of the full import:
: 
http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/index/IndexWriter.html#IndexWriter(org.apache.lucene.store.Directory,%20org.apache.lucene.analysis.Analyzer,%20boolean,%20org.apache.lucene.index.IndexDeletionPolicy,%20org.apache.lucene.index.IndexWriter.MaxFieldLength)
: 
: Why is Lucene not deleting the data of the old index if the boolean var of
: the constructor is set to true? (the results are not duplicated, but
: physically the /index directory is double the size). Has this something to
: do with the deletionPolicy that is saving commits, or a Lucene 2.9-dev bug,
: or something like that?

this is not unusual.  the documents have logically been deleted, but the 
files containing them are still on disk because the "old searcher" is 
still referencing them.  when the "new searcher" is swapped in for the old 
searcher, those files can be deleted.

on unix filesystems, the old files will actually get deleted immediately 
(even while the old searcher is still open) because unix filesystems let 
you do that.

windows filesystems won't let you delete files while they are open, so 
Lucene keeps track of the fact that the files *can* be deleted, and then 
the next time you do a commit, it cleans them up.



-Hoss



Re: boost qf weight between 0 and 10

2009-03-01 Thread Chris Hostetter

: I don't really get it. I try to boost a field according to another one, but
: I get a huge weight when I'm using a qf boost like:
: 
: /select?qt=dismax&fl=*&q="obama
: meeting"&debugQuery=true&qf=title&bf=product(title,stat_views)

bf is a boost function -- you are using a product function to multiply the 
"title" field by the "stat_views" field ... this doesn't make sense to me.

i'm assuming the "title" field contains text (the rest of your score 
explanation confirms this).  when you try to do a math function on a 
string based field it deals with the "ordinal" value -- the higher the 
string sorts lexicographically compared to all other docs, the higher the 
ordinal value.

i have no idea what's in your stat_views field -- but i can't imagine any 
way in which multiplying it by the ordinal value of your text field would 
make sense...

:   5803675.5 = (MATCH) FunctionQuery(product(ord(title),sint(stat_views))),
: product of:
: 9.5142968E7 = product(ord(title)=1119329,sint(stat_views)=85)
: 1.0 = boost
: 0.06099952 = queryNorm

: But this is not balanced between the boost in qf and bf -- how can I do this?

when it comes to function queries, you're on your own to figure out an 
appropriate query boost to balance the scores out -- when you use a 
product function the scores are going to get huge like this unless you 
balance it somehow (and that ord(title) is just making this massively 
worse)
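
for example, something like this might behave more sanely (just a sketch
-- the 0.01 multiplier is a placeholder you'd have to tune against your
real scores):

/select?qt=dismax&fl=*&q="obama meeting"&qf=title&bf=product(sint(stat_views),0.01)

that drops ord(title) entirely and scales the raw stat_views value down
so it stops swamping the text relevancy score.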


-Hoss



Re: bq type_:true for two types doesn't come up books.

2009-03-01 Thread Chris Hostetter

: So if I do :
: &bq=type_roman:true^1,5+type_comedy:true^1,5
: no videos come up 

first off: that syntax doesn't make sense ... those are commas, you need 
decimal points (1.5 not 1,5)

second: i don't really understand your goal.  you say you want to boost 
things that match those values, but then you seem dissatisfied that 
documents which don't match those queries aren't showing up high in the 
results.

my best guess is that you want books matching those queries boosted, but 
only over other books, not videos.

try something like this...

(type_roman:true type_comedy:true format:video)^1.5


-Hoss



Re: solr 1.3 - did something with deleting documents change?

2009-03-01 Thread Chris Hostetter

: The notes in the wiki seem to indicate that syntax (with multiple <id> nodes)
: will be supported in Solr 1.4, not 1.3 - but I guess it really just means that
: you can't combine those with a <query> node yet.

correct -- the wiki is pointing out that you can't combine <id> and 
<query> deletes until 1.4 -- 1.3 supported multiple <id> (note the 
CHANGES.txt reference to SOLR-133)
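
in other words (a sketch -- the ids and the query are made up):

fine in 1.3 (several ids in a single delete):

  <delete><id>doc1</id><id>doc2</id></delete>

only legal as of 1.4 (ids and queries mixed in one delete):

  <delete><id>doc1</id><query>category:old</query></delete>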

: I'll miss the deletesPending stat, I think I'm going to have to re-implement
: this within our application now - I try to block people from committing unless
: there's actually something to commit (and only after certain time interval,
: etc), and the deletesPending stat helped to determine this (maybe you have no
: new documents to index but you do need to commit).  Not a huge deal to do, it
: was just convenient.

a nice feature for solr to have in the core would be to make commit a 
no-op if there are no actual changes to commit (it would need an option or 
a query param or something in case people are trying to force a postCommit 
hook) ... it's pretty easy for Solr to know this based on the index 
version of the open reader for commits and the one being used for search.

wanna submit a patch?


-Hoss



Re: 2 strange behaviours with DIH full-import.

2009-03-01 Thread Shalin Shekhar Mangar
On Tue, Feb 17, 2009 at 5:31 PM, Marc Sturlese wrote:

>
> 2.) I run a full-import and everything works fine... I run another
> full-import in the same core and everything seems to work fine. But I have
> noticed that the index in /data/index dir is two times bigger. I have seen
> that Solr uses this indexwriter constructor when it executes a deleteAll at
> the beginning of the full import:
>
> http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/index/IndexWriter.html#IndexWriter(org.apache.lucene.store.Directory,%20org.apache.lucene.analysis.Analyzer,%20boolean,%20org.apache.lucene.index.IndexDeletionPolicy,%20org.apache.lucene.index.IndexWriter.MaxFieldLength)
>
> Why is Lucene not deleting the data of the old index if the boolean var of
> the constructor is set to true? (the results are not duplicated, but
> physically the /index directory is double the size). Has this something to
> do with the deletionPolicy that is saving commits, or a Lucene 2.9-dev bug,
> or something like that?
>

I think this is due to the IndexDeletionPolicy. The problem is that on a
commit, the IndexWriter is closed. It is re-opened only when you send
another add/delete command. While the index writer is closed, the deletion
policy does not take effect and unused commit points are not marked for
deletion. Replication hit a similar problem, where the index files on
the slave were not getting cleaned up. The solution is the same: we need to
re-open the index writer after the commit closes it.

I'll open an issue and attach a fix.

-- 
Regards,
Shalin Shekhar Mangar.


Re: Tomcat 6 HTTP Connector Threads all blocked

2009-03-01 Thread Jim Murphy

Thanks Yonik,

1. We're using the rsync snappuller/snapinstaller scripts for syncing
masters and slaves.

2. Ran jstack and found 2 kinds of stacks blocked on socketWrite0.  The
first is the one I sent previously, where it's hard to tell the request
type.  The second one is serializing a SolrDocumentList as JSON, which makes
me believe it's a query result on the slave side of things.  Our inserts use
XML but our queries use JSON.

I suppose this question might be better posed on the Tomcat list, but I'm
surprised that the server sockets just block when clients misbehave.  We
have Ruby processes that are the only clients of Solr, so it's possible
there are bugs in Ruby's Net::HTTP.  But is there something I can enable
server-side to make the server more aggressive about closing connections?
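
For instance, would tightening the Connector timeouts in server.xml help?
A sketch (the values are only illustrative):

  <Connector port="8080" protocol="HTTP/1.1"
             maxThreads="100"
             connectionTimeout="20000"
             keepAliveTimeout="15000"
             maxKeepAliveRequests="100" />

My understanding is that connectionTimeout and keepAliveTimeout only bound
how long a connection may sit idle between requests, so they may not help
if a client sends a request and then simply stops reading the response.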

 
The "other" stack trace:

 - java.net.SocketOutputStream.socketWrite0(java.io.FileDescriptor, byte[],
int, int) @bci=0 (Interpreted frame)
 - java.net.SocketOutputStream.socketWrite(byte[], int, int) @bci=44,
line=92 (Interpreted frame)
 - java.net.SocketOutputStream.write(byte[], int, int) @bci=4, line=136
(Interpreted frame)
 - org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(byte[], int,
int) @bci=11, line=737 (Interpreted frame)
 - org.apache.tomcat.util.buf.ByteChunk.flushBuffer() @bci=71, line=434
(Compiled frame)
 - org.apache.tomcat.util.buf.ByteChunk.append(byte[], int, int) @bci=159,
line=349 (Compiled frame)
 -
org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(org.apache.tomcat.util.buf.ByteChunk,
org.apache.coyote.Response) @bci=31, line=761 (Interpreted frame)
 -
org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(org.apache.tomcat.util.buf.ByteChunk,
org.apache.coyote.Response) @bci=107, line=127 (Interpreted frame)
 -
org.apache.coyote.http11.InternalOutputBuffer.doWrite(org.apache.tomcat.util.buf.ByteChunk,
org.apache.coyote.Response) @bci=49, line=570 (Interpreted frame)
 - org.apache.coyote.Response.doWrite(org.apache.tomcat.util.buf.ByteChunk)
@bci=6, line=560 (Compiled frame)
 - org.apache.catalina.connector.OutputBuffer.realWriteBytes(byte[], int,
int) @bci=38, line=353 (Compiled frame)
 - org.apache.tomcat.util.buf.ByteChunk.append(byte[], int, int) @bci=77,
line=325 (Compiled frame)
 - org.apache.tomcat.util.buf.IntermediateOutputStream.write(byte[], int,
int) @bci=14, line=242 (Compiled frame)
 - sun.nio.cs.StreamEncoder.writeBytes() @bci=120, line=202 (Compiled frame)
 - sun.nio.cs.StreamEncoder.implFlushBuffer() @bci=11, line=272 (Compiled
frame)
 - sun.nio.cs.StreamEncoder.implFlush() @bci=1, line=276 (Compiled frame)
 - sun.nio.cs.StreamEncoder.flush() @bci=12, line=122 (Compiled frame)
 - java.io.OutputStreamWriter.flush() @bci=4, line=212 (Compiled frame)
 - org.apache.tomcat.util.buf.WriteConvertor.flush() @bci=1, line=191
(Compiled frame)
 - org.apache.tomcat.util.buf.C2BConverter.flushBuffer() @bci=4, line=134
(Compiled frame)
 - org.apache.catalina.connector.OutputBuffer.write(char[], int, int)
@bci=22, line=439 (Interpreted frame)
 - org.apache.catalina.connector.CoyoteWriter.write(char[], int, int)
@bci=15, line=143 (Interpreted frame)
 - org.apache.solr.common.util.FastWriter.write(char) @bci=25, line=55
(Compiled frame)
 - org.apache.solr.request.JSONWriter.writeStr(java.lang.String,
java.lang.String, boolean) @bci=60, line=610 (Compiled frame)
 - org.apache.solr.request.TextResponseWriter.writeVal(java.lang.String,
java.lang.Object) @bci=26, line=118 (Compiled frame)
 - org.apache.solr.request.JSONWriter.writeSolrDocument(java.lang.String,
org.apache.solr.common.SolrDocument, java.util.Set, java.util.Map) @bci=148,
line=419 (Compiled frame)
 -
org.apache.solr.request.JSONWriter.writeSolrDocumentList(java.lang.String,
org.apache.solr.common.SolrDocumentList, java.util.Set, java.util.Map)
@bci=240, line=547 (Interpreted frame)
 - org.apache.solr.request.TextResponseWriter.writeVal(java.lang.String,
java.lang.Object) @bci=243, line=147 (Compiled frame)
 -
org.apache.solr.request.JSONWriter.writeNamedListAsMapWithDups(java.lang.String,
org.apache.solr.common.util.NamedList) @bci=70, line=175 (Compiled frame)
 - org.apache.solr.request.JSONWriter.writeNamedList(java.lang.String,
org.apache.solr.common.util.NamedList) @bci=10, line=288 (Interpreted frame)
 - org.apache.solr.request.JSONWriter.writeResponse() @bci=45, line=88
(Interpreted frame)
 - org.apache.solr.request.JSONResponseWriter.write(java.io.Writer,
org.apache.solr.request.SolrQueryRequest,
org.apache.solr.request.SolrQueryResponse) @bci=14, line=49 (Interpreted
frame)
 -
org.apache.solr.servlet.SolrDispatchFilter.doFilter(javax.servlet.ServletRequest,
javax.servlet.ServletResponse, javax.servlet.FilterChain) @bci=767, line=257
(Interpreted frame)
 -
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(javax.servlet.ServletRequest,
javax.servlet.ServletResponse) @bci=117, line=235 (Interpreted frame)
 -
org.apache.catalina.core.ApplicationFilterChain.doFilter(

Re: solr 1.3 - did something with deleting documents change?

2009-03-01 Thread Ryan McKinley


On Feb 28, 2009, at 5:56 PM, Stephen Weiss wrote:


Yeah honestly I don't know how it ever worked either.



my guess is that the XPP parser did not validate anything -- when we
switched to StAX, it started validating things...


ryan


Re: Search schema using q Query

2009-03-01 Thread dabboo

Hi Hoss,

Thanks a lot for the information. Here is what I am trying to achieve.

1. I am trying to customize the search with the q query parameter, so that
it can support wildcards and field boosting. I customized QueryParser and
created the wildcard query in the same way as it is created for
non-wildcard terms. But even with this changed query, the results are not
showing up.

I figured out how it is doing the field boosting with the scores. But
what I want to know is how it is fetching the records from the indexes
based on the query.

Please suggest how I should go forward.

Thanks,
Amit Garg



hossman wrote:
> 
> : One first step is to use debugQuery=true as an additional parameter to
> your
> : search request.  That'll return debug info in the response, which
> includes a
> : couple of views of the parsed query.
> 
> the query mentioned in the original post appears to come from 
> debugQuery=true using the dismax parser.
> 
> : > This query is correct and returns the result also. I am looking for
> the
> : > class file, where the actual searching is taking place. I want to see
> as how
> : > it is interpreting the query and how it is returning the result.
> 
> that's kind of a vague question ... the QParser generates 
> the query, the QueryComponent executes it ... using the SolrIndexSearcher, 
> which delegates to a Lucene IndexSearcher, which delegates back to the 
> Query to generate a Scorer to iterate over matches.
> 
> : > I am trying to customize the searching logic for our specific needs.
> 
> instead of asking which class file interprets the query, perhaps you 
> should tell us what your specific goal is.  what kinds of 
> customizations do you want to make?
> 
> http://people.apache.org/~hossman/#xyproblem
> XY Problem
> 
> Your question appears to be an "XY Problem" ... that is: you are dealing
> with "X", you are assuming "Y" will help you, and you are asking about "Y"
> without giving more details about the "X" so that we can understand the
> full issue.  Perhaps the best solution doesn't involve "Y" at all?
> See Also: http://www.perlmonks.org/index.pl?node_id=542341
> 
> 
> 
> 
> -Hoss
> 
> 
> 




Re: Trunk Replication Page Issue

2009-03-01 Thread Akshay
Hi Jeff,
The line number in your stacktrace doesn't seem to match the trunk
code (of the JSP).

Did you do an ant clean dist?
If yes, can you send me the generated servlet for the JSP?

On Fri, Feb 27, 2009 at 10:17 PM, Jeff Newburn  wrote:

> In trying trunk to fix the Lucene sync issue, we have now encountered a
> severe Java exception making the replication page non-functional.  Am I
> missing something or doing something wrong?
>
> Info:
> Slave server on the replication page.  Just a code dump as follows.
>
> Feb 27, 2009 8:44:37 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.jasper.JasperException: java.lang.NullPointerException
>at
>
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:4
> 18)
>at
> org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:337)
>at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
>at
>
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Application
> FilterChain.java:290)
>at
>
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterCh
> ain.java:206)
>at
>
> org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.
> java:630)
>at
>
> org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDis
> patcher.java:436)
>at
>
> org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatch
> er.java:374)
>at
>
> org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher
> .java:302)
>at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:
> 273)
>at
>
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Application
> FilterChain.java:235)
>at
>
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterCh
> ain.java:206)
>at
>
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.ja
> va:233)
>at
>
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.ja
> va:175)
>at
>
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128
> )
>at
>
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102
> )
>at
>
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java
> :109)
>at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
>at
>
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:
> 879)
>at
>
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(H
> ttp11NioProtocol.java:719)
>at
>
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:
> 2080)
>at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.ja
> va:885)
>at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:9
> 07)
>at java.lang.Thread.run(Thread.java:619)
> Caused by: java.lang.NullPointerException
>at
> org.apache.jsp.admin.replication.index_jsp._jspService(index_jsp.java:294)
>at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
>at
>
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:3
> 74)
>... 24 more
>
>
> --
> Jeff Newburn
> Software Engineer, Zappos.com
> jnewb...@zappos.com - 702-943-7562
>
>


-- 
Regards,
Akshay K. Ukey.


Search wildcards & field boosting using Q query

2009-03-01 Thread dabboo

Hi,

I am trying to do a wildcard search using the q query parameter with a
dismax request. Wildcard search works fine with the q.alt parameter, but
field boosting doesn't work with q.alt.
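
Concretely, that is the difference between these two requests (the qt name
is from my own config):

/select?qt=dismaxrequest&q=tes*&debugQuery=on
/select?qt=dismaxrequest&q.alt=tes*&debugQuery=on

With q, the qf boosts apply but the trailing * is not treated as a
wildcard; with q.alt, the wildcard works but the qf boosts don't apply.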

I customized the QueryParser class, and now the query that gets formed for
the wildcard term is exactly the same as the one formed without the
wildcard.

Below are the debug query results for both queries.

Without Wildcard Query

responseHeader: status=0, QTime=31
params: rows=10, start=0, debugQuery=on, q=test, qt=dismaxrequest,
version=2.2

result: two documents -- score 5.906053E-4 (987654321, testData) and score
5.906053E-7 (123456789, testData)

rawquerystring: test
querystring: test

parsedquery:
+DisjunctionMaxQuery((programJacketImage_program_s:test |
courseCodeSeq_course_s:test | authorLastName_product_s:test |
Index_Type_s:test | prdMainTitle_s:test^10.0 | discCode_course_s:test |
sourceGroupName_course_s:test | indexType_course_s:test |
prdMainTitle_product_s:test | isbn10_product_s:test |
displayName_course_s:test | groupNm_program_s:test |
discipline_product_s:test | courseJacketImage_course_s:test |
imprint_product_s:test | introText_program_s:test |
productType_product_s:test | isbn13_product_s:test |
copyrightYear_product_s:test | prdPubDate_product_s:test |
programType_program_s:test | editor_product_s:test |
courseType_course_s:test | productURL_s:test^1.0 |
courseId_course_s:test | categoryIds_product_s:test |
indexType_program_s:test | strapline_product_s:test |
subCompany_course_s:test | aluminator_product_s:test |
readBy_product_s:test | subject_product_s:test | edition_product_s:test |
programId_program_s:test)~0.01) () all:english^90.0 all:hindi^123.0
all:glorious^2000.0 all:highlight^1.0E7 all:math^100.0 all:ab^12.0
all:erer^4545.0 MultiPhraseQuery(all:"(prd prd main prd main titl prd main
titl s) (main main titl main titl s) (titl titl s) s"^10.0)
MultiPhraseQuery(all:"(product product url product url s) (url url s)
s"^1.0)

parsedquery_toString: the same query, printed without the
DisjunctionMaxQuery/MultiPhraseQuery wrappers.

explain for doc 987654321:
5.906053E-4 = (MATCH) sum of: 5.906053E-4 = (MATCH) max plus 0.01 times
others of: 5.906053E-4 = (MATCH) weight(productURL_s:test^1.0 in 1),
product of: 5.906053E-4 = queryWeight(productURL_s:test^1.0), product of:
1.0 = boost 1.0 = idf(docFreq=1, numDocs=2) 5.906053E-8 = queryNorm 1.0 =
(MATCH) fieldWeight(productURL_s:test in 1), product of: 1.0 =
tf(termFreq(productURL_s:test)=1) 1.0 = idf(docFreq=1, numDocs=2) 1.0 =
fieldNorm(field=productURL_s, doc=1)

explain for doc 123456789:
5.906053E-7 = (MATCH) sum of: 5.906053E-7 = (MATCH) max plus 0.01 times
others of: 5.906053E-7 = (MATCH) weight(prdMainTitle_s:test^10.0 in 0),
product of: 5.906053E-7 = queryWeight(prdMainTitle_s:test^10.0), product
of: 10.0 = boost 1.0 = idf(docFreq=1, numDocs=2) 5.906053E-8 = queryNorm
1.0 = (MATCH) fieldWeight(prdMainTitle_s:test in 0), product of: 1.0 =
tf(termFreq(prdMainTitle_s:test)=1) 1.0 = idf(docFreq=1, numDocs=2) 1.0 =
fieldNorm(field=prdMainTitle_s, doc=0)

QParser: DismaxQParser

boost query: english^90 hindi^123 Glorious^2000 highlighting^1000
maths^100 ab^12 erer^4545 prdMainTitle_s^10.0 productURL_s^1.0

parsed boost query: all:english^90.0 all:hindi^123.0 all:glorious^2000.0
all:highlight^1.0E7 all:math^100.0 all:ab^12.0 all:erer^4545.0
MultiPhraseQuery(all:"(prd prd main prd main titl prd main titl s) (main
main titl main titl s) (titl titl s) s"^10.0)
MultiPhraseQuery(all:"(product product url product url s) (url url s)
s"^1.0)

timing: total 31.0 (prepare 15.0, process 16.0)


With Wildcard Query

responseHeader: status=0, QTime=31
params: rows=10, start=0, debugQuery=on, q=tes*, qt=dismaxrequest,
version=2.2

rawquerystring: tes*
querystring: tes*

parsedquery:
+DisjunctionMaxQuery((programJacketImage_pr

Is it possible to modify the indexed values?

2009-03-01 Thread RaghavPrabhu

Hi all,

I am using Solr 1.3 for indexing files along with their properties, and it
is working fine. Now I want to modify some properties of a particular
document.

Say I have "sample1.doc", whose properties are file content-type, file
size, file modified-date, etc. I have indexed this file with its
properties. I want to change the modified-date value without re-indexing
the document. Is that possible?

Any suggestions would be much appreciated...


Thanks in advance
Prabhu.K


