tracking solr response time

2009-11-02 Thread bharath venkatesh
Hi,

We are using Solr for many of our products and it is doing quite well.
But as the number of hits has grown, we are experiencing latency in
certain requests; about 15% of our requests are affected. We are trying
to identify the problem: it may be a network issue, or the Solr server
may be taking time to process the request. Other than the QTime that is
returned along with the response, is there any other way to track the
Solr server's performance? How is QTime calculated? Is it the total time
from when the Solr server received the request until it returned the
response? Can we do some extra logging to track the Solr server's
performance? Ideally I would like to pass a log id along with the
request (query) to the Solr server, and have the Solr server log the
response time along with that log id.

Thanks in advance ..
Bharath


Re: tracking solr response time

2009-11-02 Thread bharath venkatesh
Thanks for the quick response
@yonik

>How much of a latency compared to normal, and what version of Solr are
you using?

The latency is usually around 2-4 seconds (sometimes more), and it
affects only 15-20% of requests; the other 80-85% are very fast, in the
millisecond range (around 200,000 requests happen every day).

@Israel: we are not using the Java client; we are using Python on the
client side, with the response formatted in JSON.

@yonik @Israel: does QTime measure the total time taken at the Solr
server? I am already measuring the time to get the response at the
client end. I would like a way to know how much time the Solr server
takes to respond (process) once it gets the request, so that I can
identify whether it is a Solr server issue or an internal network issue.
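
(For what it is worth, a minimal sketch of the client-side measurement, assuming
Python 3, a /select handler and wt=json; the host, port and query are
illustrative. The gap between the wall-clock time and the QTime in
responseHeader is roughly the network plus response-streaming cost:)

import json
import time
import urllib.parse
import urllib.request

SOLR_URL = "http://solr-host:8080/solr/select"  # illustrative host and path

def timed_query(q):
    # wt=json asks Solr for a JSON response (we already use JSON at the client).
    params = urllib.parse.urlencode({"q": q, "wt": "json"})
    start = time.time()
    body = json.loads(urllib.request.urlopen(SOLR_URL + "?" + params).read().decode("utf-8"))
    client_ms = (time.time() - start) * 1000.0
    qtime_ms = body["responseHeader"]["QTime"]  # time reported by Solr itself
    # A large client_ms with a small qtime_ms points at the network or at
    # response streaming rather than at query execution inside Solr.
    print("client=%.0fms qtime=%dms delta=%.0fms" % (client_ms, qtime_ms, client_ms - qtime_ms))

timed_query("ipod")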


@Israel: we are using RHEL Server 5 on both the client and the server.
We have 6 Solr servers, one of which acts as master. Both the client and
the Solr servers are on the same network. The servers are dedicated to
Solr, except for 2 servers which also run a DB and memcached; we have
adjusted the load accordingly.







On 11/2/09, Israel Ekpo  wrote:
> On Mon, Nov 2, 2009 at 8:41 AM, Yonik Seeley
> wrote:
>
>> On Mon, Nov 2, 2009 at 8:13 AM, bharath venkatesh
>>  wrote:
>> >We are using solr for many of ur products  it is doing quite well
>> > .  But since no of hits are becoming high we are experiencing latency
>> > in certain requests ,about 15% of our requests are suffering a latency
>>
>> How much of a latency compared to normal, and what version of Solr are
>> you using?
>>
>> >  . We are trying to identify  the problem .  It may be due to  network
>> > issue or solr server is taking time to process the request  .   other
>> > than  qtime which is returned along with the response is there any
>> > other way to track solr servers performance ?
>> > how is qtime calculated
>> > , is it the total time from when solr server got the request till it
>> > gave the response ?
>>
>> QTime is the time spent in generating the in-memory representation for
>> the response before the response writer starts streaming it back in
>> whatever format was requested.  The stored fields of returned
>> documents are not loaded at this point (to enable handling of huge
>> response lists w/o storing all in memory).
>>
>> There are normally servlet container logs that can be configured to
>> spit out the real total request time.
>>
>> > can we do some extra logging to track solr servers
>> > performance . ideally I would want to pass some log id along with the
>> > request (query ) to  solr server  and solr server must log the
>> > response time along with that log id .
>>
>> Yep - Solr isn't bothered by params it doesn't know about, so just put
>> logid=xxx and it should also be logged with the other request
>> params.
>>
>> -Yonik
>> http://www.lucidimagination.com
>>
>
>
>
> If you are not using Java then you may have to track the elapsed time
> manually.
>
> If you are using the SolrJ Java client you may have the following options:
>
> There is a method called getElapsedTime() in
> org.apache.solr.client.solrj.response.SolrResponseBase which is available to
> all the subclasses
>
> I have not used it personally but I think this should return the time spent
> on the client side for that request.
>
> The QTime is not the time on the client side but the time spent internally
> at the Solr server to process the request.
>
> http://lucene.apache.org/solr//api/solrj/org/apache/solr/client/solrj/response/SolrResponseBase.html
>
> http://lucene.apache.org/solr//api/solrj/org/apache/solr/client/solrj/response/QueryResponse.html
>
> Most likely it could be the result of an internal network issue between the
> two servers or the Solr server is competing with other applications for
> resources.
>
> What operating system is the Solr server running on? Is your client
> application connecting to a Solr server on the same network or over the
> internet? Are there other applications like database servers etc running on
> the same machine? If so, then the DB server (or any other application) and
> the Solr server could be competing for resources like CPU, memory etc.
>
> If you are using Tomcat, you can take a look in
> $CATALINA_HOME/logs/catalina.out, there are timestamps there that can also
> guide you.
>
> --
> "Good Enough" is not good enough.
> To give anything less than your best is to sacrifice the gift.
> Quality First. Measure Twice. Cut Once.
>
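
(Building on Yonik's point above that Solr ignores parameters it does not
recognize: a correlation id can simply be appended to the query string, and it
then shows up in Solr's request log next to the other params, so client-side
and server-side timings can be joined on it. A minimal sketch, assuming
Python 3; the parameter name logid and the URL are illustrative:)

import uuid
import urllib.parse
import urllib.request

SOLR_URL = "http://solr-host:8080/solr/select"  # illustrative

def query_with_logid(q):
    logid = uuid.uuid4().hex  # correlation id generated at the client
    params = urllib.parse.urlencode({"q": q, "wt": "json", "logid": logid})
    # Solr does not reject the unknown "logid" parameter; it is simply echoed
    # in the request log line together with q, wt, etc.
    response = urllib.request.urlopen(SOLR_URL + "?" + params).read()
    return logid, response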


Re: tracking solr response time

2009-11-02 Thread bharath venkatesh
@Israel: yes, I got the point that Yonik mentioned. But is QTime the total
time the Solr server spends on that request, or only part of it (is there
anything the Solr server does for that particular request which is not
included in the QTime bracket)? I am sorry for dragging this QTime question
on; I just want to be sure, as we have observed many times that there is a
huge mismatch between QTime and the time measured at the client for the
response (does this imply it is due to an internal network issue?).

@Erick: yes, many times a query is slow the first time it is executed. Is
there any way to improve on this? For querying we use the
DisMaxRequestHandler, and the queries are quite long with many faceting
parameters.
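
(The usual in-Solr answer to slow first-time queries is cache autowarming plus
firstSearcher/newSearcher warming queries in solrconfig.xml. As a rough
external alternative, a hedged sketch that replays a few representative
facet-heavy queries against a slave right after a new index snapshot is
installed; the URL, handler name and queries are purely illustrative:)

import urllib.parse
import urllib.request

SOLR_URL = "http://solr-host:8080/solr/select"  # illustrative slave URL

# Placeholder queries meant to touch the sort orders and facet fields used in
# production, so their caches are populated before real traffic arrives.
WARMUP_QUERIES = [
    {"q": "ipod", "qt": "dismax", "facet": "true", "facet.field": "category", "wt": "json"},
    {"q": "laptop bag", "qt": "dismax", "facet": "true", "facet.field": "brand", "wt": "json"},
]

def warm_up():
    # Run this after snapinstaller/commit so the first user query does not pay
    # for loading field caches and filling Solr's own caches.
    for params in WARMUP_QUERIES:
        urllib.request.urlopen(SOLR_URL + "?" + urllib.parse.urlencode(params)).read()

if __name__ == "__main__":
    warm_up()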


On Mon, Nov 2, 2009 at 10:46 PM, Israel Ekpo  wrote:

> It is the time spent at the Solr server.
>
> I think Yonik already answered this part in his response to your thread :
>
> This is what he said :
>
> QTime is the time spent in generating the in-memory representation for
> the response before the response writer starts streaming it back in
> whatever format was requested.  The stored fields of returned
> documents are not loaded at this point (to enable handling of huge
> response lists w/o storing all in memory).

Re: tracking solr response time

2009-11-03 Thread bharath venkatesh
>I didn't see where you said what Solr version you were using.

Below is the Solr version info:
Solr Specification Version: 1.2.2008.07.22.15.48.39
Solr Implementation Version: 1.3-dev
Lucene Specification Version: 2.3.1
Lucene Implementation Version: 2.3.1 629191 - buschmi - 2008-02-19 19:15:48

>this can happen with really big indexes that can't all fit in memory

One of our indexes is pretty big, about 16 GB; the other indexes (for other
applications) are small. Our servers have 32 GB of RAM.

>There are some pretty big concurrency differences between 1.3 and 1.4 too (if 
>your tests involve many concurrent requests).

As I said, we observed the latency in our live (production) system; that is
when we started logging response times at the client to identify the
problem. So yes, in our production system there is considerable concurrency
during peak times.







On 11/3/09, Yonik Seeley  wrote:
> On Mon, Nov 2, 2009 at 2:21 PM, bharath venkatesh
>  wrote:
>> we observed many times there is huge mismatch between qtime and
>> time measured at the client for the response
>
> Long times to stream back the result to the client could be due to
>  - client not reading fast enough
>  - network congestion
>  - reading the stored fields takes a long time
> - this can happen with really big indexes that can't all fit in
> memory, and stored fields tend to not be cached well by the OS
> (essentially random access patterns over a huge area).  This ends up
> causing a disk seek per document being
> streamed back.
>  - locking contention for reading the index (under Solr 1.3, but not
> under 1.4 on non-windows platforms)
>
> I didn't see where you said what Solr version you were using.  There
> are some pretty big concurrency differences between 1.3 and 1.4 too
> (if your tests involve many concurrent requests).
>
> -Yonik
> http://www.lucidimagination.com
>
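
(A quick sanity check of the "index fits in memory" condition mentioned above:
compare the on-disk index size with physical RAM. A hedged sketch, Linux-only;
the index path is illustrative, and this only gives an upper bound since other
processes also compete for the page cache:)

import os

INDEX_DIR = "/var/solr/data/index"  # illustrative path to the Lucene index directory

def index_size_gb(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024.0 ** 3)

def mem_total_gb():
    # Linux-only: read physical RAM from /proc/meminfo (the servers run RHEL 5).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024.0 ** 2)  # value is in kB

print("index: %.1f GB, RAM: %.1f GB" % (index_size_gb(INDEX_DIR), mem_total_gb()))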


Re: tracking solr response time

2009-11-08 Thread bharath venkatesh
Thanks, Lance, for the clear explanation. Are you saying we should give the
Solr JVM enough memory so that the OS cache can optimize disk I/O
efficiently? In our case we have a 16 GB index, so would it be enough to
allocate the Solr JVM 20 GB of memory and rely on the OS cache to optimize
disk I/O, i.e. to cache the index in memory?


Below are the stats related to the caches:


name: queryResultCache
class: org.apache.solr.search.LRUCache
version: 1.0
description: LRU Cache(maxSize=512, initialSize=512, autowarmCount=256,
  regenerator=org.apache.solr.search.solrindexsearche...@67e112b3)
stats:
  lookups : 0
  hits : 0
  hitratio : 0.00
  inserts : 8
  evictions : 0
  size : 8
  cumulative_lookups : 15
  cumulative_hits : 7
  cumulative_hitratio : 0.46
  cumulative_inserts : 8
  cumulative_evictions : 0


name: documentCache
class: org.apache.solr.search.LRUCache
version: 1.0
description: LRU Cache(maxSize=512, initialSize=512)
stats:
  lookups : 0
  hits : 0
  hitratio : 0.00
  inserts : 0
  evictions : 0
  size : 0
  cumulative_lookups : 744
  cumulative_hits : 639
  cumulative_hitratio : 0.85
  cumulative_inserts : 105
  cumulative_evictions : 0


name: filterCache
class: org.apache.solr.search.LRUCache
version: 1.0
description: LRU Cache(maxSize=512, initialSize=512, autowarmCount=256,
  regenerator=org.apache.solr.search.solrindexsearche...@1e3dbf67)
stats:
  lookups : 0
  hits : 0
  hitratio : 0.00
  inserts : 20
  evictions : 0
  size : 12
  cumulative_lookups : 64
  cumulative_hits : 60
  cumulative_hitratio : 0.93
  cumulative_inserts : 12
  cumulative_evictions : 0


Hits and hit ratio are zero for the document cache, filter cache, and query
cache; only the cumulative hits and hit ratio have non-zero numbers. Is this
how it is supposed to be, or do we need to configure it properly?

Thanks,
Bharath





On Sat, Nov 7, 2009 at 5:47 AM, Lance Norskog  wrote:

> The OS cache is the memory used by the operating system (Linux or
> Windows) to store a cache of the data stored on the disk. The cache is
> usually by block numbers and are not correlated to files. Disk blocks
> that are not used by programs are slowly pruned from the cache.
>
> The operating systems are very good at maintaining this cache. It is
> usually better to give the Solr JVM enough memory to run comfortably
> and rely on the OS cache to optimize disk I/O, instead of giving it
> all available ram.
>
> Solr has its own caches for certain data structures, and there are no
> solid guidelines for tuning those. The solr/admin/stats.jsp page shows
> the number of hits & deletes for the caches and most people just
> reload that over & over.
>
> On Fri, Nov 6, 2009 at 3:09 AM, bharath venkatesh
>  wrote:
> >>I have to state the obvious: you may really want to upgrade to 1.4 when
> > it's out
> >
> > when would solr 1.4 be released .. is there any beta version available ?
> >
> >>We don't have the details, but a machine with 32 GB RAM and 16 GB index
> > should have the whole index cached by the OS
> >
> > do we have to configure Solr for the index to be cached by the OS in an
> > optimised way? How does this caching of the index in memory happen? Are
> > there any docs or links which give details regarding the same?
> >
> >>unless something else is consuming the memory or unless something is
> > constantly throwing data out of the OS cache (e.g. frequent index
> > optimization).
> >
> > what are the factors which would cause data to be constantly thrown out
> > of the OS cache? (we are doing index optimization only once a day, at
> > midnight)
> >
> >
> > Thanks,
> > Bharath
> >
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>


Re: tracking solr response time

2009-11-10 Thread bharath venkatesh
Otis,

   This means we have to leave enough space for the OS cache to cache the
whole index. So in the case of a 16 GB index, if I am not wrong, at least
16 GB of memory must not be allocated to any application, so that the OS
cache can make use of it.

>> The operating systems are very good at maintaining this cache. It
> > usually better to give the Solr JVM enough memory to run comfortably
> > and rely on the OS cache to optimize disk I/O, instead of giving it
> > all available ram.

How much RAM would be enough for the Solr JVM to run comfortably?


thanks,
Bharath


On Tue, Nov 10, 2009 at 3:59 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:

> Bharat,
>
> No, you should not give the JVM so much memory.  Give it enough to avoid
> overly frequent GC, but don't steal memory from the OS cache.
>
> Otis
> --
> Sematext is hiring -- http://sematext.com/about/jobs.html?mls
> Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
>
Re: tracking solr response time

2009-11-10 Thread bharath venkatesh
Thanks, Yonik. We will consider jconsole.

On Tue, Nov 10, 2009 at 7:01 PM, Yonik Seeley wrote:

> On Tue, Nov 10, 2009 at 8:07 AM, bharath venkatesh
>  wrote:
> > how much ram would be good enough for the Solr JVM  to run comfortably.
>
> It really depends on how much stuff is cached, what fields you facet
> and sort on, etc.
>
> It can be easier to measure than to try and calculate it.
> Run jconsole to see the memory use, do a whole bunch of queries that
> do all the faceting, sorting, and function queries you will do in
> production.  Then invoke GC a few times in rapid succession via
> jconsole and see how much memory is actually used.  Double that to
> account for a new index searcher being opened while the current one is
> still open (that's just the worst case for Solr 1.4... the average
> reopen case is better since many segments can be shared).  Add a
> little more for safety.
>
> -Yonik
> http://www.lucidimagination.com
>
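
(A worked example of the rule of thumb above, with made-up numbers; the 5 GB
figure is an assumption read off jconsole, not a measurement from this setup:)

# Hypothetical: jconsole shows about 5 GB of live heap after forcing GC while
# the heaviest faceting/sorting/function queries are running.
measured_after_gc_gb = 5.0           # assumption
heap_gb = measured_after_gc_gb * 2   # two searchers can be open during a reopen
heap_gb += 2                         # a little extra for safety
print("JVM heap budget: about %.0f GB" % heap_gb)                             # ~12 GB
print("left for the OS cache on a 32 GB box: about %.0f GB" % (32 - heap_gb))  # ~20 GB, room for a 16 GB index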


understanding how solr/lucene handles a select query (to analyze where solr/lucene is taking time )

2009-11-10 Thread bharath venkatesh
Hi,
As mentioned in my previous post, we are experiencing a delay (latency) for
15% of requests to Solr. The delay is about 2-4 seconds and sometimes even
reaches 10 seconds (noticed from the Apache Tomcat logs where Solr is
running, so an internal network issue is ruled out). To fix the problem we
need to analyze where Solr/Lucene is taking time, and for that we need to
understand how Solr/Lucene handles a select query (which methods are used).
Is there any doc or link which explains this in detail? We are planning to
change the source code to log the time each method takes while Solr handles
a request, so that we can analyze where Solr/Lucene is taking time. I am not
sure if this is the right way (unless it is the only way). Is there any
other way to analyze where Solr/Lucene is taking time?

So we need to know two things:
 1. How does Solr/Lucene handle a select query (a link or doc will do)?
 2. Is there any way to analyse where Solr/Lucene is taking time?

Thanks in Advance,
Bharath
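
(One way to approach point 2 without patching the source: the Solr/container
request log already carries QTime per request, so slow requests and their
parameters can be picked out offline. A hedged sketch; the log line shape below
is an assumption and the regex will need adjusting to the actual Tomcat/Solr
log format:)

import re
import sys

# Assumed shape of a Solr request log line, for example:
#   ... path=/select params={q=foo&wt=json&logid=abc} hits=12 status=0 QTime=2345
LINE_RE = re.compile(r"params=\{(?P<params>[^}]*)\}.*?QTime=(?P<qtime>\d+)")

def slow_requests(log_path, threshold_ms=1000):
    slow = []
    with open(log_path) as f:
        for line in f:
            m = LINE_RE.search(line)
            if m and int(m.group("qtime")) >= threshold_ms:
                slow.append((int(m.group("qtime")), m.group("params")))
    return sorted(slow, reverse=True)

if __name__ == "__main__":
    for qtime, params in slow_requests(sys.argv[1])[:20]:
        print(qtime, params)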


latency in solr response is observed after index is updated

2009-12-01 Thread bharath venkatesh

Hi,

We are observing latency (sometimes huge latency, up to 10-20 secs)
in Solr responses after the index is updated. What is the reason for this
latency and how can it be minimized?


Note: our index size is pretty large.

Any help would be appreciated, as we are largely affected by this.

Thanks in advance.
Bharath




RE: latency in solr response is observed after index is updated

2009-12-01 Thread Bharath Venkatesh
Hi Kalidoss,
  
   I am not aware of using solr-config for committing the document, but I
have described below how we update and commit documents:
 
curl http://solr_url/update --data-binary @feeds.xml -H 'Content-type:text/xml; charset=utf-8'
curl http://solr_url/update --data-binary '<commit/>' -H 'Content-type:text/xml; charset=utf-8'

where feeds.xml contains the documents in XML format.

we have master and slave replication for solr server.

Updates happen on the master; snappuller and snapinstaller are run on the
slaves periodically. Queries do not go to the master, only to the slaves.

Can anything be concluded from the above information?

Thanks,
Bharath



-Original Message-
From: kalidoss [mailto:kalidoss.muthuramalin...@sifycorp.com]
Sent: Tue 12/1/2009 2:38 PM
To: solr-user@lucene.apache.org
Subject: Re: latency in solr response  is observed  after index is updated
 
Are you using solr-config for committing the document?

bharath venkatesh wrote:
> Hi,
>
> We are observing latency (some times huge latency upto 10-20 secs) 
> in solr response  after index is updated . whats the reason of this 
> latency and how can it be minimized ?
> Note: our index size is pretty large.
>
> any help would be appreciated as we largely affected by it
>
> Thanks in advance.
> Bharath








Re: latency in solr response is observed after index is updated

2009-12-04 Thread Bharath Venkatesh

Hi Kay Kay,
 We have commented out the autoCommit frequency in solrconfig.xml.

below is the cache configuration:-



   


  


Will further requests after the index is updated wait for autowarming to complete?

Thanks,
Bharath


Kay Kay wrote:
> What would be the average doc size? What is the autoCommit frequency set in
> solrconfig.xml?
>
> Another place to look at is the field cache size and the nature of warmup 
> queries run after a new searcher is created ( happens due to a commit ).
>



maximum no of values in multi valued string field

2009-12-15 Thread bharath venkatesh

Hi ,
  Is there any limit on the number of values stored in a single
multi-valued string field? If a single multi-valued string field contains
1000-2000 string values, what will the effect on query performance be (we
will only be indexing this field, not storing it)? Is it better to put all
the strings in a single text field instead of a multi-valued string field?


Thanks in Advance,
Bharath




score computation for dismax handler

2010-02-18 Thread bharath venkatesh
Hi ,
  When a query is made across multiple fields in the dismax handler using
the qf parameter, I have observed (with debugQuery enabled) that the
resultant score is the maximum of the per-field scores. But I want the
resultant score to be the sum of the scores across fields (like the
standard handler). Can anyone tell me how this can be achieved?
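
(One knob worth knowing about here: dismax combines per-field scores as the
maximum plus tie times the sum of the remaining field scores, so tie=1.0
approximates the sum-of-fields behaviour of the standard handler. A hedged
sketch of such a request; the URL, handler name and fields are illustrative:)

import urllib.parse
import urllib.request

SOLR_URL = "http://solr-host:8080/solr/select"  # illustrative

params = urllib.parse.urlencode({
    "q": "ipod charger",
    "qt": "dismax",              # assumes a dismax handler registered under this name
    "qf": "title^2.0 body^1.0",  # illustrative fields and boosts
    "tie": "1.0",                # 1.0 -> score close to the sum of per-field scores
    "debugQuery": "true",
    "wt": "json",
})
print(urllib.request.urlopen(SOLR_URL + "?" + params).read()[:500])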


solr caches from external caching system like memcached

2010-05-20 Thread bharath venkatesh
Hi,

  Is it possible to back Solr caches such as the query cache, filter cache,
and document cache with an external caching system like memcached? It has
several advantages, such as centralized caching and shorter JVM garbage
collection pauses, since we can assign less memory to the JVM.

Thanks,
Bharath