When I look at the Solr logs I find the exception below:
Caused by: java.io.IOException: Invalid JSON type java.lang.String, expected Map
at org.apache.solr.schema.JsonPreAnalyzedParser.parse(JsonPreAnalyzedParser.java:86)
at org.apache.solr.schema.PreAnalyzedField$PreAnalyzedTokenizer.decodeInput(
When I fire a plain query it returns the doc as expected. (Example:
q=synthesis)
I run into the problem when I include a wildcard character in the query.
(Example: q=synthesi*)
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at http://localhost:8983/solr/Metad
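For reference, a minimal SolrJ sketch of the two queries described above; the
collection name "mycollection" is only a placeholder, not the name truncated in
the error message:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class WildcardRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder collection URL for illustration only.
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      QueryResponse ok = client.query(new SolrQuery("synthesis"));   // returns the doc as expected
      System.out.println("exact: " + ok.getResults().getNumFound());
      QueryResponse bad = client.query(new SolrQuery("synthesi*"));  // this is the request that fails
      System.out.println("wildcard: " + bad.getResults().getNumFound());
    }
  }
}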
From: Shawn Heisey
Reply-To: "solr-user@lucene.apache.org"
Date: Tuesday, December 5, 2017 at 1:31 PM
To: "solr-user@lucene.apache.org"
Subject: Re: Dataimport handler showing idle status with multiple shards
On 12/5/2017 10:47 AM, Sarah Weissman wrote:
I’ve recently been using the dataimport
Thanks Walter. Your case does apply as both data stores do indeed cover the
same kind of material, with many important terms in common. "source" + fq:
coming up.
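A minimal sketch of the "source" + fq idea, assuming a shared field named
"source" whose values identify the data store (field name and values are
hypothetical):

import org.apache.solr.client.solrj.SolrQuery;

public class SourceFilter {
  // Restrict a query to one data store without changing how the main query is scored.
  static SolrQuery restrictToStore(String userQuery, String store) {
    SolrQuery q = new SolrQuery(userQuery);
    q.addFilterQuery("source:" + store);  // fq is cached independently of the main query
    return q;
  }
}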
-----Original Message-----
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Tuesday, 5 December 2017 5:51 p.m.
To: solr-use
On 12/5/2017 10:47 AM, Sarah Weissman wrote:
I’ve recently been using the dataimport handler to import records from a
database into a Solr cloud collection with multiple shards. I have 6 dataimport
handlers configured on 6 different paths all running simultaneously against the
same DB. I’ve no
Hi,
I’ve recently been using the dataimport handler to import records from a
database into a Solr cloud collection with multiple shards. I have 6 dataimport
handlers configured on 6 different paths all running simultaneously against the
same DB. I’ve noticed that when I do this I often get “idl
No custom code at all.
On Dec 5, 2017 10:31 PM, "Erick Erickson" wrote:
> Do you have any custom code in the mix anywhere?
>
> On Tue, Dec 5, 2017 at 5:02 AM, Rick Dig wrote:
> > Hello all,
> > is it normal to have many instances (100+) of SolrIndexSearchers to be open
> > at the same time? O
It is challenging as the performance of different use cases and domains
will be very dependent on the use case (there's no one globally perfect
relevance solution). But a good set of metrics to see *generally* how stock
Solr performs across a reasonable set of verticals would be nice.
My philosoph
Do you have any custom code in the mix anywhere?
On Tue, Dec 5, 2017 at 5:02 AM, Rick Dig wrote:
> Hello all,
> is it normal to have many instances (100+) of SolrIndexSearchers to be open
> at the same time? Our Heap Analysis shows this to be the case.
>
> We have autoCommit for every 5 minutes,
HTTP request log, not solr.log.
This is intra-cluster:
10.98.15.241 - - [29/Oct/2017:23:59:57 +0000] "POST
//sc16.prod2.cloud.cheggnet.com:8983/solr/questions_shard4_replica8/auto
HTTP/1.1" 200 194
This is from outside (yes, we have long queries):
10.98.15.110 - - [29/Oct/2017:23:59:58 +0000]
First of all, I'm using Solr 7.1.0 ...
I took a look into the log file of Solr and see the following 2 log statements
for the query "test":
4350609 INFO (qtp1918627686-691) [c:gettingstarted s:shard1 r:core_node5
x:gettingstarted_shard1_replica_n2] o.a.s.c.S.Request
[gettingstarted_shard1_replica
Thanks Yonik and thanks Doug.
I agree with Doug on adding a few generic test corpora that Jenkins
automatically runs some metrics on, to verify that Apache Lucene/Solr changes
don't affect a golden truth too much.
This of course can be very complex, but I think it is a direction the Apache
Lucene/Solr comm
Anybody have a favorite profiler to use with Solr? I’ve been asked to look at
why our queries are slow at a detailed level.
Personally, I think they are slow because they are so long, up to 40 terms.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
In 6.5.1, the intra-cluster requests are POST, which makes them easy to
distinguish in the request logs. Also, the intra-cluster requests go to a
specific core instead of to the collection. So we use the request logs and grep
out the GET lines.
We are considering fronting every Solr process wit
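One way to read the "grep out the GET lines" step above, as a small Java
sketch; the request-log file name is a placeholder, and the assumption is the
one stated above (external traffic is GET, intra-cluster traffic is POST):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ExternalGetLines {
  public static void main(String[] args) throws IOException {
    // Keep only the GET lines, i.e. the requests that came from outside the cluster.
    Files.lines(Paths.get("solr_request.log"))
         .filter(line -> line.contains("\"GET "))
         .forEach(System.out::println);
  }
}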
OK, I found the solution myself.
The reason for this behaviour was the "lowernames = true" configuration of the
Tika request handler, which transformed "module-id" into "module_id".
I added a matching copyField to my schema and it seems to work now.
Maybe this information is useful for someone ...
Hi,
I have implemented implicit routing with the configuration below.
I manually created one default collection, 'AMS_Config', which contains the
configuration files (schema, solrconfig, etc.).
Using 'AMS_Config' I created 2 collections, model and workset respectively, with
the below command, which created 2 shard
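With the implicit router, documents land on a shard via either the router.field
or the _route_ parameter. A minimal SolrJ sketch of the latter; the Solr URL,
shard name and document fields are assumptions:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class ImplicitRouteExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");               // placeholder document

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.setParam("_route_", "shard1");         // target shard name, hypothetical
      req.process(client, "model");              // index into the 'model' collection
      client.commit("model");
    }
  }
}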
Just a piece of feedback from clients on the original docCount change.
I have seen several cases with clients where the switch to docCount
surprised and harmed relevance.
More broadly, I’m concerned that when we make these changes there’s not a
testing process against test corpora with judgments and
Hello all,
is it normal to have many instances (100+) of SolrIndexSearchers to be open
at the same time? Our Heap Analysis shows this to be the case.
We have autoCommit for every 5 minutes, with openSearcher=true, would this
close the old searcher and create a new one or just create a new one with
On Tue, Dec 5, 2017 at 5:15 AM, alessandro.benedetti wrote:
> "Lucene/Solr doesn't actually delete documents when you delete them, it
> just marks them as deleted. I'm pretty sure that the difference between
> docCount and maxDoc is deleted documents. Maybe I don't understand what
> I'm talking
To be more precise and to provide some more details, I tried to simplify the
problem by using the Solr examples that were delivered with the solr
So I started bin/solr -e cloud, using 2 nodes, 2 shards and a replication factor of 2.
To understand the following, it might be important to know which por
"Lucene/Solr doesn't actually delete documents when you delete them, it
just marks them as deleted. I'm pretty sure that the difference between
docCount and maxDoc is deleted documents. Maybe I don't understand what
I'm talking about, but that is the best I can come up with. "
Thanks Shawn, y
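For anyone who wants to see the three numbers being discussed side by side, a
quick SolrJ sketch against the Luke handler (the core URL is an assumption):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.LukeRequest;
import org.apache.solr.client.solrj.response.LukeResponse;
import org.apache.solr.common.util.NamedList;

public class IndexCounts {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
      LukeResponse rsp = new LukeRequest().process(client);
      NamedList<Object> info = rsp.getIndexInfo();
      System.out.println("numDocs     = " + info.get("numDocs"));     // live documents
      System.out.println("maxDoc      = " + info.get("maxDoc"));      // live + deleted-but-not-yet-merged-away
      System.out.println("deletedDocs = " + info.get("deletedDocs")); // the difference between the two
    }
  }
}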
Tom,
Thank you for trying out a bunch of things with the CDCR setup. I am able to
successfully replicate the exact issue on my setup; this is a problem.
I have opened a JIRA for it:
https://issues.apache.org/jira/browse/SOLR-11724. Feel free to add any
relevant details as you like.
Amrit Sarkar
Hi!
I am trying to index RTF files by uploading them to the Solr server with
curl.
I am trying to pass the required metadata via the
"literal.<fieldname>=" parameters.
The "id" and the "module-id" are mandatory in my schema.
The "id" is recognized correctly, as one can see in the Solr response
"doc=48a0xxx
Hi Stefan,
I am not aware of an option to log only client-side queries, but I think you
can find a workaround with what you currently have. If you take a look at the log
lines for a query that comes from the client and one that is the result of querying
shards, you will see differences - the most simple o