Apologies for the late reply. Thanks, Toke, for the great explanation :)
I am new to Solr, so I am unaware of DocValues; could you please explain?
With Regards
Aman Tandon
On Fri, May 2, 2014 at 1:52 PM, Toke Eskildsen wrote:
> On Thu, 2014-05-01 at 23:03 +0200, Aman Tandon wrote:
> > So can you explain how enum is faster than default.
On Thu, 2014-05-01 at 23:03 +0200, Aman Tandon wrote:
> So can you explain how enum is faster than default.
The fundamental difference is that enum iterates the terms and counts how
many of the documents associated with each term are in the hits, while fc
iterates all hits and updates a counter for each term associated with those
documents.
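For intuition, the difference between the two counting strategies can be sketched in Python. This is a toy model, not Solr's actual implementation; the data structures and names (postings, doc_terms, a single-valued field) are invented for illustration:

```python
def count_enum(postings, hits):
    """enum-style: iterate every term, count how many of its docs are hits.

    postings: dict mapping term -> list of doc ids containing that term.
    hits: iterable of doc ids matched by the query.
    """
    hit_set = set(hits)
    return {term: sum(1 for d in docs if d in hit_set)
            for term, docs in postings.items()}

def count_fc(doc_terms, hits):
    """fc-style: iterate only the hits, bump a counter via a doc -> term lookup.

    doc_terms: dict mapping doc id -> the term it holds (single-valued field).
    """
    counts = {}
    for d in hits:
        term = doc_terms[d]
        counts[term] = counts.get(term, 0) + 1
    return counts

# Tiny index: 5 docs, 2 terms; the query matched docs 0, 1 and 2.
postings = {"red": [0, 2, 3], "blue": [1, 4]}
doc_terms = {0: "red", 1: "blue", 2: "red", 3: "red", 4: "blue"}
hits = [0, 1, 2]

# Both strategies arrive at the same counts; the work done differs:
# enum touches every term's posting list, fc touches only the hits.
assert count_enum(postings, hits) == count_fc(doc_terms, hits) == {"red": 2, "blue": 1}
```

This also suggests why their costs diverge: enum scales with the number of terms, fc with the number of hits plus the per-term counter array.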
On Thu, 2014-05-01 at 23:38 +0200, Shawn Heisey wrote:
> I was surprised to read that fc uses less memory.
I think that is an error in the documentation. Except for special cases,
such as asking for all facet values on a high cardinality field, I would
estimate that enum uses less memory than fc.
On 5/1/2014 3:03 PM, Aman Tandon wrote:
> Please check this link:
> http://wiki.apache.org/solr/SimpleFacetParameters#facet.method
> There is something mentioned about facet.method in the wiki:
>
> *The default value is fc (except for BoolField which uses enum) since it
> tends to use less memory and is faster than the enumeration method when a
> field has many unique terms in the index.*
Hi Shawn,
Please check this link:
http://wiki.apache.org/solr/SimpleFacetParameters#facet.method
There is something mentioned about facet.method in the wiki:

*The default value is fc (except for BoolField which uses enum) since it
tends to use less memory and is faster than the enumeration method when a
field has many unique terms in the index.*
On 4/30/2014 5:53 PM, Aman Tandon wrote:
> Shawn -> Yes, we have plans to move to SolrCloud. Our total index size
> is 40GB with 11M docs, available RAM is 32GB, the heap allowed for Solr
> is 14GB, and the GC tuning parameters used on our server
> are -XX:+UseConcMarkSweepGC -XX:+PrintGCApplicat
Jeff -> Thanks, Jeff, this discussion on JIRA is really quite helpful.
Shawn -> Yes, we have plans to move to SolrCloud. Our total index size
is 40GB with 11M docs, available RAM is 32GB, the heap allowed for Solr
is 14GB, and the GC tuning parameters used on our server
are -XX:+U
It's not just FacetComponent; here's the original feature ticket for
timeAllowed:
https://issues.apache.org/jira/browse/SOLR-502
As I read it, timeAllowed only limits the time spent actually getting
documents, not the time spent figuring out what data to get or how. I
think that means the primar
On 4/29/2014 11:43 PM, Aman Tandon wrote:
> My heap size is 14GB and I am not using SolrCloud currently; the 40GB
> index is replicated from the master to two slaves.
>
> I read somewhere that it returns the partial results computed by the
> query within the specified amount of time, which is defin
On Wed, Apr 30, 2014 at 2:16 PM, Aman Tandon wrote:
> <double name="time">3337.0</double>
> <double name="time">6739.0</double>
>
Most time is spent in facet counting. FacetComponent doesn't check
timeAllowed right now. You can try experimenting with facet.method=enum or
even with https://issues.apache.org/jira/b
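For reference, switching the facet method is just a per-request parameter; a sketch of such a request follows (the host, port, and field name `category` are placeholders, not from this thread):

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=category&facet.method=enum
```

facet.method can also be set per field via f.<fieldname>.facet.method, so the two methods can be compared without touching other fields.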
Hi Salman,
Here is my debug query dump, please help! I am unable to find the
wildcards in it.
[debug dump timings; the XML tags were stripped by the archive, leaving only
the raw values: true 0 10080 884159 629472 491426 259356 259029 257193
195077 193569 179369 115356 111644 86794 80621 72815 68982 65082]
I had this issue too. timeAllowed only works for a certain phase of the
query; I think that's the 'process' part. However, if the query is taking
time in the 'prepare' phase (e.g., I think, for wildcards, to get all the
possible combinations before running the query), it won't have any impact
on that. You
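The idea behind timeAllowed can be sketched as follows. This is a simplified model (an assumption about the mechanism, loosely modeled on Lucene's TimeLimitingCollector, which the SOLR-502 ticket is built on): the budget is checked only while hits are being collected, so any work done before collection starts, such as wildcard expansion in the 'prepare' phase, is not bounded by it.

```python
import time

def collect_with_time_limit(doc_ids, matches, time_allowed_ms):
    """Collect matching docs, stopping once the time budget is exhausted.

    Returns (hits, partial): partial is True when collection was cut short,
    analogous to Solr's partialResults flag in the response header.
    """
    deadline = time.monotonic() + time_allowed_ms / 1000.0
    hits, partial = [], False
    for doc in doc_ids:
        if time.monotonic() > deadline:
            partial = True  # budget exhausted mid-collection
            break
        if matches(doc):
            hits.append(doc)
    return hits, partial

# A tiny corpus and a cheap predicate finish well inside the budget,
# so the result is complete here; a slow predicate or huge corpus would
# trip the deadline and yield partial results instead.
hits, partial = collect_with_time_limit(range(1000), lambda d: d % 2 == 0, 60000)
```

Note the check sits inside the collection loop: nothing here can stop a long-running prepare step that happens before the loop even begins, which matches the behavior described above.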
Shawn, this is the first time I have raised this problem.
My heap size is 14GB and I am not using SolrCloud currently; the 40GB
index is replicated from the master to two slaves.
I read somewhere that it returns the partial results computed by the query
within the specified amount of time, which is defin
On 4/29/2014 10:05 PM, Aman Tandon wrote:
> I am using Solr 4.2 with an index size of 40GB; while querying my index,
> some queries take a significant amount of time, about 22 seconds, *in the
> case of minmatch of 50%*. So I added a parameter
> timeAllowed = 2000 in my q
Hi,
I am using Solr 4.2 with an index size of 40GB; while querying my index,
some queries take a significant amount of time, about 22 seconds, *in the
case of minmatch of 50%*. So I added the parameter
timeAllowed = 2000 to my query but this doesn't seem to work. Pleas
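For reference, timeAllowed is passed as a plain request parameter in milliseconds; a sketch of such a request (host, port, and the query itself are placeholders, not from this thread):

```
http://localhost:8983/solr/select?q=some+query&timeAllowed=2000
```

When the budget is exceeded, the response header should carry a partialResults=true flag alongside whatever hits were collected in time.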