We figured out that if we use a shingle-only field, not combined with output
unigrams, performance gets better. If we include the output unigrams, it is no
better than the normal index field. So we decided to make a separate field
containing only the combined shingles and use this field to support the main
queries.
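(For illustration only: a minimal sketch of such a setup in schema.xml. The field
and type names below, text_shingle and content_shingle, and the analyzer chain are
assumptions, not the configuration actually used in this thread.)

  <fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- outputUnigrams="false": this field holds only shingles, e.g. "barack obama";
           single-term matches are served by the normal content field -->
      <filter class="solr.ShingleFilterFactory" minShingleSize="2" maxShingleSize="2"
              outputUnigrams="false"/>
    </analyzer>
  </fieldType>

  <field name="content_shingle" type="text_shingle" indexed="true" stored="false"/>
  <copyField source="content" dest="content_shingle"/>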
On Wed, Aug 31, 2011, Lord Khan Han wrote:
Thanks, Erick. If I figure out something, I will let you know as well. Nobody
replied except you; I thought there might be more people involved here.
Thanks
On Wed, Aug 31, 2011 at 3:47 AM, Erick Erickson wrote:
OK, I'll have to defer because this makes no sense.
4+ seconds in the debug component?
Sorry I can't be more help here, but nothing really
jumps out.
Erick
On Tue, Aug 30, 2011 at 12:45 PM, Lord Khan Han wrote:
Below is the output of the debug. I am measuring pure Solr qtime, which shows in
the QTime field in the Solr XML.
mrank:[0 TO 100]
Timing values from the debug section (the component labels were lost in this
plain-text capture):
8584.0, 12.0, 12.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8572.0, 4480.0, 0.0, 0.0, 41.0, 0.0, 0.0, 4051.0
On Tue, Aug 30, 2011 at 5:38 PM, Erick Erickson wrote:
Can we see the output if you specify both
&debugQuery=on&debug=true
The debug=true will show the time taken up by the various
components, which is sometimes surprising...
Second, we never asked the most basic question: what are
you measuring? Is this the QTime of the returned response?
(which is th
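(For reference, a request carrying both flags might look like the line below; the
host, port, and query string are placeholders, and qt=search is the handler this
thread discusses:)

  http://localhost:8983/solr/select?q=test&qt=search&debugQuery=on&debug=true&rows=10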
Hi Erick,
Fields are lazy-loaded, content is stored in Solr, and the machine has 32 GB; Solr
has a 20 GB heap. There is no swapping.
As you can see, we have many phrases in the same query. I couldn't find a way to
drop qtime to subseconds. Surprisingly, the non-shingled test has a better qtime!
On Mon, Aug 29, 2011, Erick Erickson wrote:
Oh, one other thing: have you profiled your machine
to see if you're swapping? How much memory are
you giving your JVM? What is the underlying
hardware setup?
Best
Erick
On Mon, Aug 29, 2011 at 8:09 AM, Erick Erickson wrote:
200K docs and 36G index? It sounds like you're storing
your documents in the Solr index. In and of itself, that
shouldn't hurt your query times, *unless* you have
lazy field loading turned off. Have you checked that
lazy field loading is enabled?
Best
Erick
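(For reference, lazy field loading is controlled by a one-line setting in the
<query> section of solrconfig.xml:)

  <enableLazyFieldLoading>true</enableLazyFieldLoading>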
On Sun, Aug 28, 2011 at 5:30 AM, Lord Khan Han wrote:
Another interesting thing: all one-word or multi-word queries, including phrase
queries such as "barack obama", are slower in the shingle configuration. What am I
doing wrong? Without shingles, "barack obama" has a query time of 300 ms; with
shingles, 780 ms.
On Sat, Aug 27, 2011 at 7:58 PM, Lord Khan Han wrote:
Hi,
What is the difference between Solr 3.3 and the trunk?
I will try 3.3 and let you know the results.
Here is the search handler:
explicit
10
mrank:[0 TO 100]
explicit
10
edismax
title^1.05 url^1.2 content^1.7 m_title^10.0
content^18.0 m_
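(The XML markup around these values was lost in the capture. Below is a hedged
reconstruction of what the handler definition might look like in solrconfig.xml;
mapping the bare values above onto the usual edismax parameters echoParams, rows,
fq, defType, qf and pf is an assumption, and the boost list is truncated where the
original is:)

  <requestHandler name="search" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="fq">mrank:[0 TO 100]</str>
      <str name="defType">edismax</str>
      <str name="qf">title^1.05 url^1.2 content^1.7 m_title^10.0</str>
      <!-- truncated in the original capture -->
      <str name="pf">content^18.0 m_</str>
    </lst>
  </requestHandler>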
I'm not sure what the issue could be at this point. I see you've got
qt=search - what's the definition of that request handler?
What is the parsed query (from the debugQuery response)?
Have you tried this with Solr 3.3 to see if there's any appreciable difference?
Erik
On Aug 27, 2011, Lord Khan Han wrote:
When grouping is off, the query time drops, i.e. from 3567 ms to 1912 ms. Grouping
increases the query time and makes it useless to cache. But the same config is
still faster without shingles.
We have a head-to-head test against this commercial search engine this Wednesday,
so I am looking for all suggestions.
On Sat, Aug 27, 2011, Erik wrote:
Please confirm whether this is caused by grouping. Turn grouping off; what's the
query time like?
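(For a clean comparison, grouping is toggled with request parameters along these
lines; group.field=host is a placeholder here, since only group.limit=5 appears in
the captured request:)

  ...&group=true&group.field=host&group.limit=5    <- grouping on
  ...&group=false                                  <- grouping off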
On Aug 27, 2011, at 07:27 , Lord Khan Han wrote:
On the other hand, we couldn't use the cache for the below types of queries. I
think it's caused by the grouping. Anyway, we need to be sub-second without the
cache.
On Sat, Aug 27, 2011 at 2:18 PM, Lord Khan Han wrote:
Hi,
Thanks for the reply.
Here is the Solr log capture:
**
hl.fragsize=100&spellcheck=true&spellcheck.q=X&group.limit=5&hl.simple.pre=&hl.fl=content&spellcheck.collate=true&wt=javabin&hl=true&rows=20&version=2&fl=score,approved,domain,host,id,lang,mimetype,title,tstamp,url,category&hl.snip
On Aug 26, 2011, at 17:49 , Lord Khan Han wrote:
> We are indexing news documents from various sites. Currently we have
> 200K docs indexed. Total index size is 36 gig. There are also attachments to
> the news (PDFs, docs, etc.), so document size can be high (i.e. 10 MB).
>
> We are using some com