> Shawn Heisey wrote on 17 May 2017 at 15:10:
>
>
> On 5/17/2017 6:18 AM, Thomas Porschberg wrote:
> > Thank you. I am now a step further.
> > I could import data into the new collection with the DIH. However I
> > observed the following exception
> > in solr.log:
> >
> > request:
+1 to changing to a new message
A strawman new message could be: "Performance warning: Overlapping
onDeckSearchers=2; consider reducing commit frequency if performance
problems are encountered"
On Wed, May 17, 2017 at 1:15 PM, Mike Drob wrote:
> You're committing too frequently, so you have new searche
Hi all:
I am new to Solr, and I am using Solr 6.4.2. I try to add fields and copyFields
to the schema programmatically as below. However, on a few occasions, I see a
few fields are not added but the copyFields are added when I try to add a lot of
fields and copyFields (about 80 fields, 40 copyFie
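A way to reduce the chance of fields and copyFields getting out of sync is to send all schema changes in one Schema API request, so they are applied together. The sketch below only builds the request body; the collection name, field names, and the use of the `requests` library in the comment are assumptions for illustration:

```python
import json

# Sketch: batch all add-field and add-copy-field commands into a single
# Schema API payload. Field definitions here are hypothetical.
fields = [{"name": f"attr_{i}", "type": "string", "stored": True}
          for i in range(3)]
copy_fields = [{"source": f["name"], "dest": "_text_"} for f in fields]

# One request body carrying both command lists.
payload = json.dumps({"add-field": fields, "add-copy-field": copy_fields})

# To send (assuming the `requests` library and a local Solr):
# requests.post("http://localhost:8983/solr/<collection>/schema",
#               data=payload, headers={"Content-Type": "application/json"})
```

Whether a single batched request is applied atomically in your Solr version is worth verifying, but it at least avoids interleaving dozens of separate HTTP calls.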
cool, thanks ... easy enough to fix the SQL statement for now ;-)
On Tue, May 16, 2017 at 6:27 PM, Kevin Risden wrote:
> Well didn't take as long as I thought:
> https://issues.apache.org/jira/browse/CALCITE-1306
>
> Once Calcite 1.13 is released we should upgrade and get support for this
> agai
You're committing too frequently, so you have new searchers getting queued
up before the previous ones have been processed.
You have several options for dealing with this: increase the commit
interval, add hardware, or reduce query warming.
I don't know if uncommenting that section will help b
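The commit-interval option can be sketched as a solrconfig.xml fragment; the time values below are illustrative assumptions, not recommendations:

```xml
<!-- Sketch (solrconfig.xml): lengthen commit intervals so new searchers
     are not opened faster than they can be warmed. Times are illustrative. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>            <!-- hard commit every 60 s -->
    <openSearcher>false</openSearcher>  <!-- flush only; no new searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>30000</maxTime>            <!-- new searcher at most every 30 s -->
  </autoSoftCommit>
</updateHandler>
```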
Chris, Shawn,
I am using 5.2.1. Neither the array (Shawn) nor the document list (Chris)
works for me in the Admin panel. However, CSV works fine.
Clearly we are long overdue for an upgrade.
Cheers -- Rick
On May 17, 2017 10:22:28 AM EDT, Shawn Heisey wrote:
>On 5/16/2017 12:41 PM, Rick Leir w
This has been changed already in 6.4.0. From the CHANGES.txt entry:
SOLR-9712: maxWarmingSearchers now defaults to 1, and more importantly
commits will now block if this limit is exceeded instead of throwing an
exception (a good thing). Consequently there is no longer a risk in
overlapping com
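For reference, the limit discussed in that entry lives in the query section of solrconfig.xml; a minimal sketch of setting it explicitly (the surrounding elements are assumptions based on the stock config):

```xml
<!-- Sketch: explicit maxWarmingSearchers in solrconfig.xml. Since 6.4.0
     the default is 1, and commits block rather than fail when exceeded. -->
<query>
  <useColdSearcher>false</useColdSearcher>
  <maxWarmingSearchers>1</maxWarmingSearchers>
</query>
```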
Also, what is your autoSoftCommit setting? That also opens up a new searcher.
On Wed, May 17, 2017 at 8:15 AM, Jason Gerlowski wrote:
> Hey Shawn, others.
>
> This is a pitfall that Solr users seem to run into with some
> frequency. (Anecdotally, I've bookmarked the Lucidworks article you
> refe
Hey Shawn, others.
This is a pitfall that Solr users seem to run into with some
frequency. (Anecdotally, I've bookmarked the Lucidworks article you
referenced because I end up referring people to it often enough.)
The immediate first advice when someone encounters these
onDeckSearcher error mess
Thanks Joel, will try that.
Binary response would be more performant.
I observed that the server sends responses in 32 KB chunks and the client
reads them with an 8 KB buffer on the input stream. I don't know if changing
that would impact performance. Even if the buffer size is increased on the
httpclient, it c
On 5/17/2017 2:40 AM, Giedrius wrote:
> I've been using cursorMark for quite a while, but I noticed that
> sometimes the value is huge (more than 8K). It results in Request-URI
> Too Long response. Is there a way to send cursorMark in POST request's
> Body? If it is, could you please provide an exa
On 5/16/2017 12:41 PM, Rick Leir wrote:
> In the Solr Admin Documents tab, with the document type set to JSON, I cannot
> get it to accept more than one document. The legend says "Document(s)". What
> syntax is expected? It rejects an array of documents. Thanks -- Rick
See the box labeled "Addin
Hi,
I've been using cursorMark for quite a while, but I noticed that sometimes
the value is huge (more than 8K). It results in Request-URI Too Long
response. Is there a way to send cursorMark in POST request's Body? If it
is, could you please provide an example? If POST is not possible, is there
a
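Sending cursorMark in a POST body is possible, since Solr accepts form-encoded query parameters via POST. The sketch below only builds such a body; the collection URL in the comment and the oversized placeholder mark are assumptions for illustration:

```python
from urllib.parse import urlencode

# Sketch: put all query parameters, including a long cursorMark, into a
# form-encoded POST body instead of the URL. The mark value is a stand-in;
# real marks come from the previous response's nextCursorMark.
params = {
    "q": "*:*",
    "sort": "id asc",
    "rows": 100,
    "cursorMark": "A" * 10000,  # stand-in for a mark larger than 8 KB
}
body = urlencode(params)

# POST `body` to /solr/<collection>/select with
# Content-Type: application/x-www-form-urlencoded, e.g. with requests:
# requests.post("http://localhost:8983/solr/<collection>/select", data=params)
```

Because the parameters travel in the request body, the Request-URI length limit no longer applies.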
On 5/17/2017 6:18 AM, Thomas Porschberg wrote:
> Thank you. I am now a step further.
> I could import data into the new collection with the DIH. However I observed
> the following exception
> in solr.log:
>
> request:
> http://127.0.1.1:8983/solr/hugo_shard1_replica1/update?update.distrib=TOLEAD
On 5/17/2017 5:57 AM, Srinivas Kashyap wrote:
> We are using Solr 5.2.1 version and are currently experiencing below Warning
> in Solr Logging Console:
>
> Performance warning: Overlapping onDeckSearchers=2
>
> Also we encounter,
>
> org.apache.solr.common.SolrException: Error opening new searcher
Susmit,
You could wrap a LimitStream around the outside of all the relational
algebra. For example:
parallel(limit(intersect(intersect(search, search), union(search, search))))
In this scenario the limit would happen on the workers.
As for the worker/replica ratio, this will depend on how
> Tom Evans wrote on 17 May 2017 at 11:48:
>
>
> On Wed, May 17, 2017 at 6:28 AM, Thomas Porschberg
> wrote:
> > Hi,
> >
> > I did not manipulate the data dir. What I did was:
> >
> > 1. Downloaded solr-6.5.1.zip
> > 2. ensured no solr process is running
> > 3. unzipped solr-6.5.1.
Hi All,
We are using Solr 5.2.1 version and are currently experiencing below Warning in
Solr Logging Console:
Performance warning: Overlapping onDeckSearchers=2
Also we encounter,
org.apache.solr.common.SolrException: Error opening new searcher. exceeded
limit of maxWarmingSearchers=2, try a
hey erik, totally unaware of those two. we're able to retrieve metadata
about the query itself that way?
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Tue, May 16, 2017 at 1:54 PM, Erik Hatcher
wrote:
On Wed, May 17, 2017 at 6:28 AM, Thomas Porschberg
wrote:
> Hi,
>
> I did not manipulate the data dir. What I did was:
>
> 1. Downloaded solr-6.5.1.zip
> 2. ensured no solr process is running
> 3. unzipped solr-6.5.1.zip to ~/solr_new2/solr-6.5.1
> 4. started an external zookeeper
> 5. copied a
Hi,
I want to group a large set of documents by 3 or 4 fields, but
CollapsingQParserPlugin works only with one field.
If I add more than one collapse fq for different fields, it doesn't group on
the fields independently; it uses the groups of the first field as input to
the second one and hence gi
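One common workaround is to build a single collapse key at index time and collapse on that. A sketch using an update processor chain that clones several grouping fields into one field and concatenates the values (all field names here are hypothetical, and the chain is untested):

```xml
<!-- Sketch: combine several grouping fields into one single-valued
     collapse key at index time. Field names are hypothetical. -->
<updateRequestProcessorChain name="collapse-key">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <arr name="source">
      <str>fieldA</str>
      <str>fieldB</str>
      <str>fieldC</str>
    </arr>
    <str name="dest">collapse_key</str>
  </processor>
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="fieldName">collapse_key</str>
    <str name="delimiter">|</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Queries can then collapse on the single combined field, e.g. fq={!collapse field=collapse_key}.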