Deepak:
I would strongly urge you to consider changing your solution so that it
does _not_ need 35,000 fields. Needing that many usually indicates that
there are much better ways of tackling the problem. As Shawn says, 35,000
fields won't make much difference for an individual search. But 35,000
fields _do_
bq. does it even make sense to cache anything
In a word, "no". Not only would I set the cache size to zero,
I'd form my filter queries with {!cache=false}... There's no particular
point in computing the entire cache entry in this case, possibly even
with a cost=101. See:
http://yonik.com/adva
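For example, a filter that skips the cache and runs as a post-filter might look like this (a sketch; the popularity field is an assumption, not from this thread):

fq={!frange cache=false cost=101 l=0 u=100}popularity

With cache=false nothing is stored in the filterCache, and a cost of 100 or more makes the frange run as a post-filter, so it is only evaluated against documents that already match the rest of the query.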
On 5/11/2018 11:20 AM, tayitu wrote:
> I am using Solr 6.6.0. I have created a collection and uploaded the config
> files to zookeeper. I can see the collection and config files from the Solr
> Admin UI. When I try to run a DataImport, I get the following error:
>
> ZKPropertiesWriter Could not read DIH pro
Hey Shawn, I tried debugging the actual Solr code locally with the following
two different forms of frange, to see if Solr is somehow parsing it wrong.
But the parsed query that Solr produces for the filter query looks pretty
much the same in both cases.
query1 -> +_val_:{!frange cost=200 l=30 u=100 incl=true incu=false
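One way to compare how Solr parses the two forms is to add debug=query to the request and read the parsed_filter_queries entry in the debug section of the response, e.g. (a sketch; myFunc is a stand-in for whatever function the real query wraps):

fq={!frange cost=200 l=30 u=100 incl=true incu=false}myFunc&debug=query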
Hi all,
We have a requirement for NRT search. Our autoSoftCommit time is set to 500
ms (I know it's low, but that's another story). We use filter queries
extensively for most of our queries.
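For reference, a 500 ms soft commit interval corresponds to a solrconfig.xml setting like this (a sketch of the standard element, not quoted from our config):

<autoSoftCommit>
  <maxTime>500</maxTime>
</autoSoftCommit>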
But I am trying to understand how filter query caching works with NRT.
Now, as I understand it, we use fq for quer
On 5/11/2018 9:26 AM, Andy C wrote:
> Why are range searches more efficient than wildcard searches? I guess I
> would have expected that they just provide different mechanisms for defining
> the range of unique terms that are of interest, and that the merge
> processing would be identical.
I hope I
I am using Solr 6.6.0. I have created a collection and uploaded the config
files to zookeeper. I can see the collection and config files from the Solr
Admin UI. When I try to run a DataImport, I get the following error:
ZKPropertiesWriter Could not read DIH properties from /configs/collection
name/dataimpo
Correction: the solution below did not quite get what we need.
I need the stats reports for the range.
I'll keep digging on this one.
On Friday, May 11, 2018 10:59:45 AM PDT, Jim Freeby wrote:
I found a solution.
If I use tags for the facet range definition and the st
I found a solution.
If I use tags for the facet range definition and the stats definition, I can
include them in the facet pivot:
stats=true
stats.field={!tag=piv1 percentiles='50'}price
facet=true
facet.range={!tag=r1}someDate
f.someDate.facet.range.start=2018-01-01T00:00:00Z
f.someDate.facet.range
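A fuller parameter set along those lines might look like this (a sketch; the end/gap values and the pivot field someField are assumptions, not from the thread):

stats=true
stats.field={!tag=piv1 percentiles='50'}price
facet=true
facet.range={!tag=r1}someDate
f.someDate.facet.range.start=2018-01-01T00:00:00Z
f.someDate.facet.range.end=2019-01-01T00:00:00Z
f.someDate.facet.range.gap=+1MONTH
facet.pivot={!range=r1 stats=piv1}someField

The {!range=r1 stats=piv1} local params on facet.pivot are what hang the tagged range facet and stats off each pivot bucket.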
All,
I'd like to generate stats for the results of a facet range.
For example, calculate the mean sold price over a range of months.
Does anyone know how to do this?
This Jira issue seems to indicate it's not yet possible:
[SOLR-6352] Let Stats Hang off of Range Facets - ASF JIRA
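As an aside, the JSON Facet API can nest a stats aggregation under a range facet, which may cover this use case (a sketch; the someDate and price field names follow the examples above, and the date bounds are assumptions):

json.facet={
  by_month: {
    type: range,
    field: someDate,
    start: "2018-01-01T00:00:00Z",
    end: "2019-01-01T00:00:00Z",
    gap: "+1MONTH",
    facet: { mean_price: "avg(price)" }
  }
}

Each range bucket then carries a mean_price value alongside its count.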
Shawn,
Why are range searches more efficient than wildcard searches? I guess I
would have expected that they just provide different mechanisms for defining
the range of unique terms that are of interest, and that the merge
processing would be identical.
Would a search such as:
field:c*
be more e
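For concreteness, the roughly equivalent range form of that wildcard would be (a sketch, assuming a single-character prefix):

field:c*
field:[c TO d}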
On 5/10/2018 8:28 PM, Shivam Omar wrote:
Thanks Shawn. So there are cases when a soft commit will not be faster than a
hard commit with openSearcher=true. We have a case where we have to do bulk
deletions; in that case, will a soft commit be faster than hard commits?
I actually have no idea wheth
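For what it's worth, a bulk delete-by-query with an explicit soft commit can be issued against the update handler like this (a sketch; the collection name and query are hypothetical):

curl 'http://localhost:8983/solr/mycollection/update?softCommit=true' \
  -H 'Content-Type: application/json' \
  -d '{"delete": {"query": "status:obsolete"}}'

Swapping softCommit=true for commit=true would request a hard commit that opens a new searcher instead.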
On 5/10/2018 2:22 PM, Deepak Goel wrote:
Are there any benchmarks for this approach? If not, I can give it a spin.
Also, I am wondering if there are any alternative approaches (I guess Lucene
stores data in an inverted index format).
Here is the only other query I know of that can find documents missing
*security.json*
{
  "authentication":{
    "class":"solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permission