We have two SOLR colos.
We issued a command to delete. IDS DELETED: [1000236662963,
1000224906023, 1000240171970, 1000241597424, 1000241604072,
1000241604073, 1000240171754, 1000241604056, 1000241604062,
1000237569503]
COLO1 deleted everything, but COLO2 skipped the deletes.
I've noticed something weird since implementing the change Shawn suggested,
I wonder if someone can shed some light on it:
Since changing from delete by query _root_:.. to querying for ids _root_:
and then deleteById(ids from root query), we have started to notice some
facet counts for child docum
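For reference, the two delete approaches being compared might be expressed as JSON update requests like these (the parent id is hypothetical; the explicit ids are taken from the list above):

```json
{ "delete": { "query": "_root_:PARENT_ID" } }
```

versus first collecting ids with a `_root_:` query, then deleting them explicitly:

```json
{ "delete": ["1000236662963", "1000224906023"] }
```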
Thanks Amrit. Can you explain a bit more what kind of requests won't be
logged? Is that something configurable for Solr?
Best,
Wei
On Thu, Nov 9, 2017 at 3:12 AM, Amrit Sarkar wrote:
> Wei,
>
> Are the requests coming through to a collection that has multiple shards
> and replicas? Please mind a upda
Hi Erick,
I have added apache-mime4j-core-0.7.2.jar to the Java Build Path in
Eclipse, but it is still not working.
Regards,
Edwin
On 13 November 2017 at 23:33, Erick Erickson
wrote:
> Where are you getting your mime4j file? MimeConfig is in
> /extraction/lib/apache-mime4j-core-0.7.2.jar
Have you considered collection aliasing? You can create an alias that
points to multiple collections. So you could keep specific collections
and have aliases that encompass your regions
The one caveat here is that sorting the final result set by score will
require that the collections be rough
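A sketch of the aliasing approach via the Collections API (the collection and alias names here are hypothetical):

```
# create an alias that spans two regional collections
/solr/admin/collections?action=CREATEALIAS&name=all_regions&collections=region_us,region_eu

# query the alias as if it were a single collection
/solr/all_regions/select?q=*:*
```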
Hi Chris,
I assumed that you apply some sort of fq=price:[100 TO 200] to focus on wanted
products.
Can you share the full JSON faceting request? numFound:0 suggests that
something is completely wrong.
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch C
I am happy to report that <1> fixed these:
PERFORMANCE WARNING: Overlapping onDeckSearchers=2
We still occasionally see timeouts, so we may have to explore <2>.
On Thu, Oct 26, 2017 at 12:12 PM, Fengtan wrote:
> Thanks Erick and Emir -- we are going to start with <1> and possibly <2>.
>
>
Hi Emir,
I can't apply filters to the original query because I don't know in advance
which filters will meet the criterion I'm looking for. Unless I'm missing
something obvious.
I tried the JSON facet you suggested but received
"response":{"numFound":0,"start":0,"maxScore":0.0,
Hi,
I'm looking for some input on design considerations for defining
collections in a SolrCloud cluster. Right now, our cluster consists of two
collections in a 2 shard / 2 replica mode. Each collection has a dedicated
set of sources and they don't overlap, which made it an easy decision.
Recently,
Hi Vincenzo,
It is not perfect, but you could achieve something similar using the _query_
hook, e.g.:
&defType=lucene&q=_query_:"{!edismax qf='f1 f2' mm=2}my query" OR
_query_:"{!edismax qf='f3 f4' mm=1}my query"
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detect
We are actually very close to doing what Shawn has suggested.
Emir has a good point about new collections failing on deletes/updates of
older documents which were not present in the new collection. But even if
this feature can be implemented for an append-only log, it would make a good
feature IMO.
Hi Chris,
You mention it returns all manufacturers? Even after you apply filters (don’t
see filter in your example)? You can control how many facets are returned with
facet.limit and you can use facet.pivot.mincount to determine how many facets
are returned. If you calculate sum on all manufactur
thanks for your reply.
I'm not seeing any documentation explaining exactly how the weightField is
used.
So, is it just a field which I define on each document and populate with
some number during indexing? And during search it will be used to sort the
suggestions?
Hi Alessandro,
I'm currently on Solr version 6.2.1, but will soon be moving to 6.6. I'm
not using DirectSolrSpellcheck, but using Index and File based.
The words I was testing against are definitely available in the File and
possibly in the Index as well.
What I found was if I don't set the maxRe
Which Solr version are you using?
From the documentation:
"Only query words, which are absent in index or too rare ones (below
maxQueryFrequency ) are considered as misspelled and used for finding
suggestions.
...
These parameters (maxQueryFrequency and thresholdTokenFrequency) can be a
percen
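For illustration, these thresholds are set on the spellchecker in solrconfig.xml; the field name and values below are hypothetical:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- query terms appearing in more than 1% of documents are treated as correctly spelled -->
    <float name="maxQueryFrequency">0.01</float>
    <!-- candidate suggestions must appear in at least 0.01% of documents -->
    <float name="thresholdTokenFrequency">0.0001</float>
  </lst>
</searchComponent>
```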
By definition mm is applied across all the fields you define in "df"
in your edismax handler.
You can always override df on a per-query basis, but there's no way
that I know of to say "mm really only applies to fields a, b, c even
though df is set to a, b, c, d, e, f
Best,
Erick
On Mon, Nov
Where are you getting your mime4j file? MimeConfig is in
/extraction/lib/apache-mime4j-core-0.7.2.jar and you need to make sure
you're including that at a guess.
Best,
Erick
On Mon, Nov 13, 2017 at 6:15 AM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> I am using Solr 7.1.0, and I am trying to index EML
It depends how you want to use the payloads.
If you want to use the payloads to calculate additional features, you can
implement a payload feature:
This feature could calculate the sum of numerical payload for the query
terms in each document ( so it will be a query dependent feature and will
lev
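As a sketch, such a query-dependent feature might combine the LTR SolrFeature with the payload_score query parser (the feature name and field name are hypothetical):

```json
{
  "name": "payloadSum",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!payload_score f=terms_with_payloads func=sum v=${user_query}}"
  }
}
```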
Surely someone else can chime in,
but when you say: "so regarding to it we need to index the particular
> client data into particular shard so if its manageable than we will
> improve the performance as we need"
You can / should create different collections for different client data, so
that you
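If a single collection is kept instead, the compositeId router can co-locate each client's documents on one shard by prefixing the document id with a routing key (the client and collection names here are hypothetical):

```
# document ids share a routing prefix, so they hash to the same shard
clientA!doc1
clientA!doc2

# restrict a query to that client's shard(s)
/solr/clients/select?q=*:*&_route_=clientA!
```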
Hi,
I am using Solr 7.1.0, and I am trying to index EML files using the
SimplePostTools.
However, I get the following error:
java.lang.NoClassDefFoundError:
org/apache/james/mime4j/stream/MimeConfig$Builder
Are there any new classes or dependencies which I need to add compared to
Solr 6?
The in
Thanks Amrit,
My requirement is to achieve the best performance while using the document
routing facility in Solr. For that, we need to index each particular
client's data into a particular shard; if that is manageable, then we can
improve the performance as we need.
Please do the needful.
Regards,
--
In your *solr.in.sh*, set
# By default the start script uses "localhost"; override the hostname here
# for production SolrCloud environments to control the hostname exposed to
cluster state
SOLR_HOST=
Younge, Kent A - Norman, OK - Contractor wrote
> Hello,
>
> I am getting an error message whe
Hi All,
Not sure if what I'm asking is possible, but I'm looking for a way to
define minimum should match ("mm") only for a few fields in a query.
And if possible, is there any chance to configure this behaviour in a
request handler in solrconfig.xml?
Best regards,
Vincenzo
Hi all,
we are looking for a way to use Solr functionality to return the overlapping
area from a polygon intersection. The intersection, as far as we know, will
return all polygons that intersect with the given radius, but we are
interested in the overlap part only. In our index, we store p
Hello,
I was just wondering if the in-place doc values updates are real time in
Solr 7. Since these fields are neither indexed, nor stored, and are only
present in doc values space, are the updates to these fields real time?
I tried on my local Solr 7 instance, and the updates are becoming only
v
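For context, an in-place update only applies to fields that are docValues-only (non-indexed, non-stored, single-valued); a hedged sketch with hypothetical field names:

```
schema: a field eligible for in-place updates
  <field name="popularity" type="pint" indexed="false" stored="false" docValues="true"/>

atomic update request (JSON) that qualifies as in-place:
  [{ "id": "doc1", "popularity": { "set": 42 } }]
```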