This may have been introduced by changes made to solve
https://issues.apache.org/jira/browse/SOLR-5968
I created https://issues.apache.org/jira/browse/SOLR-6501 to track the new
bug.
On Tue, Sep 9, 2014 at 4:53 PM, Mike Hugo wrote:
Hello,
With Solr 4.7 we had some queries that returned dynamic fields by passing in
an fl=*_exact parameter; this is not working for us after upgrading to Solr
4.10.0. It appears to be a problem only when requesting wildcarded
fields via SolrJ.
With Solr 4.10.0 - I downloaded the binary and set up
Greg and I are talking about the same type of parallelism.
We do the same thing - if I know there are 10,000 results, we can chunk
that up across multiple worker threads up front without having to page
through the results. We know there are 10 chunks of 1,000, so we can have
one thread process 0-1,000, the next 1,000-2,000, and so on.
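The chunk-boundary math described above can be sketched in plain Java. The actual SolrJ call is only hinted at in a comment (setStart/setRows usage is an assumption about how a worker would consume each chunk; only the partitioning logic is shown here):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a known result count into fixed-size chunks so each worker
// thread can fetch its slice with start/rows, without paging sequentially.
public class ResultChunker {

    // Returns {start, rows} pairs covering numFound results.
    public static List<int[]> chunks(int numFound, int rows) {
        List<int[]> out = new ArrayList<>();
        for (int start = 0; start < numFound; start += rows) {
            int size = Math.min(rows, numFound - start);
            out.add(new int[] { start, size });
        }
        return out;
    }

    public static void main(String[] args) {
        // 10,000 results in chunks of 1,000 -> 10 chunks, one per worker thread.
        for (int[] c : chunks(10_000, 1_000)) {
            System.out.println("start=" + c[0] + " rows=" + c[1]);
            // Hypothetical per-worker usage:
            // query.setStart(c[0]); query.setRows(c[1]);
        }
    }
}
```

The last chunk is clipped to the remaining count, so an uneven total (say 2,500 with rows=1,000) yields a final chunk of 500.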
' and 'rows'?
>
> 4 or 5 requests still seems a very low limit to be running into OOM
> issues though, so perhaps it is both issues combined?
>
> Ta,
> Greg
>
>
>
> On 18 March 2014 07:49, Mike Hugo wrote:
>
> > Thanks!
> >
> >
day for release
> propagation to the Apache mirrors): i.e., next Friday-ish.
>
> Steve
>
> On Mar 17, 2014, at 4:40 PM, Mike Hugo wrote:
>
> > Thanks Steve,
> >
> > That certainly looks like it could be the culprit. Any word on a release
> > date for 4.7.1? Da
fixed by a commit under) SOLR-5875: <
> https://issues.apache.org/jira/browse/SOLR-5875>.
>
> If you can build from source, it would be great if you could confirm the
> fix addresses the issue you're facing.
>
> This fix will be part of a to-be-released Solr 4.7.1.
>
I should add each node has 16G of ram, 8GB of which is allocated to the
JVM. Each node has about 200k docs and happily uses only about 3 or 4gb of
ram during normal operation. It's only during this deep pagination that we
have seen OOM errors.
On Mon, Mar 17, 2014 at 3:14 PM, Mike Hugo
Hello,
We recently upgraded to Solr Cloud 4.7 (went from a single node Solr 4.0
instance to a 3 node Solr 4.7 cluster).
Part of our application does an automated traversal of all documents that
match a specific query. It does this by iterating through results by
setting the start and rows parameters.
Or
> perhaps it's in 4.7, I don't know, this JIRA issue is a little confusing as
> it's still open, though it looks like stuff has been committed:
> https://issues.apache.org/jira/browse/SOLR-5130
> --
> Mark Miller
> about.me/markrmiller
>
> On March 12, 2014
After a collection has been created in SolrCloud, is there a way to modify
the Replication Factor?
Say I start with a few nodes in the cluster, and have a replication factor
of 2. Over time, as the index grows and we add more nodes to the cluster,
can I increase the replication factor to 3?
Thanks!
't working for me either.
> Oh well.
>
> But... it should work for the LucidWorks Search query parser!
>
> -- Jack Krupansky
>
> -Original Message- From: Mike Hugo
> Sent: Tuesday, May 21, 2013 11:26 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Expanding sets
s but returns 0 results
{!surround}("common lisp" OR "assembly language") W (programming)
On Tue, May 21, 2013 at 8:32 AM, Jack Krupansky wrote:
> I'll make sure to include that specific example in the new Solr book.
>
>
> -- Jack Krupansky
>
> -
nd&indent=true"
>
> The LucidWorks Search query parser also supports NEAR, BEFORE, and AFTER
> operators, in conjunction with OR and "-" to generate span queries:
>
> q=(java OR groovy OR scala) BEFORE:0 (programming OR coding OR development)
>
>
Is there a way to query for combinations of two sets of words? For
example, if I had
(java or groovy or scala)
(programming or coding or development)
Is there a query parser that, at query time, would expand that into
combinations like
java programming
groovy programming
scala programming
java
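One purely client-side option (an assumption on my part, not something Solr does for you; the surround parser quoted earlier handles this at query time instead) is to expand the cross product before building the query. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: expand two word sets into all pairwise phrase combinations,
// which could then be OR'ed together into one boolean query.
public class PhraseExpander {

    public static List<String> combine(List<String> first, List<String> second) {
        List<String> phrases = new ArrayList<>();
        for (String a : first) {
            for (String b : second) {
                phrases.add("\"" + a + " " + b + "\"");
            }
        }
        return phrases;
    }

    public static void main(String[] args) {
        List<String> phrases = combine(
                List.of("java", "groovy", "scala"),
                List.of("programming", "coding", "development"));
        // Joined with OR, this forms a single query of 9 phrase clauses.
        System.out.println(String.join(" OR ", phrases));
    }
}
```

The obvious drawback is that the clause count is the product of the set sizes, so this only scales to small sets.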
Does anyone know if a version of ConcurrentUpdateSolrServer exists that
would use the size in memory of the queue to decide when to send documents
to the solr server?
For example, if I set up a ConcurrentUpdateSolrServer with 4 threads and a
batch size of 200, that works if my documents are small.
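I'm not aware of a stock variant that flushes on queue memory size, but the flush decision can be sketched in plain Java: track an estimated byte size per queued document and flush when a threshold is crossed. The size estimate and the flush body are assumptions; a real version would wrap ConcurrentUpdateSolrServer and forward each batch to it:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: buffer documents and flush when the estimated total byte size
// crosses a threshold, instead of flushing on a fixed document count.
public class SizeBoundedBuffer {
    private final long maxBytes;
    private final List<String> queue = new ArrayList<>();
    private long queuedBytes = 0;
    public int flushCount = 0; // exposed only for illustration

    public SizeBoundedBuffer(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Crude estimate: a Java char is 2 bytes; a real version would
    // account for field names and per-document overhead.
    private long estimateSize(String doc) {
        return 2L * doc.length();
    }

    public void add(String doc) {
        queue.add(doc);
        queuedBytes += estimateSize(doc);
        if (queuedBytes >= maxBytes) {
            flush();
        }
    }

    public void flush() {
        if (queue.isEmpty()) return;
        // Placeholder: here the batch would be handed to the Solr client.
        queue.clear();
        queuedBytes = 0;
        flushCount++;
    }
}
```

With a 100-byte threshold, ten 10-character documents (20 estimated bytes each) trigger a flush after every fifth add, independent of any document-count batch size.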
Explicitly running an optimize on the index via the admin screens solved
this problem - the correct counts are now being returned.
On Tue, May 22, 2012 at 4:33 PM, Mike Hugo wrote:
We're testing a snapshot of Solr4 and I'm looking at some of the responses
from the Luke request handler. Everything looks good so far, with the
exception of the "distinct" attribute which (in Solr3) shows me the
distinct number of terms for a given field.
Given the request below, I'm consistentl
wrote:
> Hello Mike,
>
> have a look at Solr's Schema Browser. Click on "FIELDS", select "label"
> and have a look at the number of distinct (term-)values.
>
> Regards,
> Em
>
>
> On 15.02.2012 23:07, Mike Hugo wrote:
> > Hello,
Hello,
We're building an auto suggest component based on the "label" field of
documents. Is there a way to see how many terms are in the dictionary, or
how much memory it's taking up? I looked on the statistics page but didn't
find anything obvious.
Thanks in advance,
Mike
ps- here's the conf
own Mike!
> I'm going to start looking into this now...
>
> -Yonik
> lucidimagination.com
>
>
>
> On Thu, Jan 26, 2012 at 11:06 PM, Mike Hugo wrote:
> > I created issue https://issues.apache.org/jira/browse/SOLR-3062 for this
> > problem. I was able to tra
I've been looking into this a bit further and am trying to figure out why
the FQ isn't getting applied.
Can anyone point me to a good spot in the code to start looking at how FQ
parameters are applied to query results in Solr4?
Thanks,
Mike
On Thu, Jan 26, 2012 at 10:06 PM, Mike H
bits() method, returning all documents in a random access way
) - before that commit the join / fq functionality works as expected /
documented on the wiki page. After that commit it's broken.
Any assistance is greatly appreciated!
Thanks,
Mike
On Thu, Jan 26, 2012 at 11:04 AM, Mike Hugo wrote:
Hello,
I'm trying out the Solr JOIN query functionality on trunk. I have the
latest checkout, revision #1236272 - I did the following steps to get the
example up and running:
cd solr
ant example
java -jar start.jar
cd exampledocs
java -jar post.jar *.xml
Then I tried a few of the sample queries
> Steve
>
> > -Original Message-
> > From: Mike Hugo [mailto:m...@piragua.com]
> > Sent: Tuesday, January 24, 2012 3:56 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: HTMLStripCharFilterFactory not working in Solr4?
> >
> > Thanks for the
Word Break rules from UAX#29 <
> http://unicode.org/reports/tr29/> to find token boundaries, and then
> outputs only alphanumeric tokens. See the JFlex grammar for details: <
> http://svn.apache.org/viewvc/lucene/dev/trunk/modules/analysis/common/src/java/org/apache/lucene/anal
previous behavior.
> See https://issues.apache.org/jira/browse/LUCENE-3690 for more details.
>
> -Yonik
> http://www.lucidimagination.com
>
>
>
> On Tue, Jan 24, 2012 at 1:34 PM, Mike Hugo wrote:
> > We recently updated to the latest build of Solr4 and everything is
We recently updated to the latest build of Solr4 and everything is working
really well so far! There is one case that is not working the same way it
was in Solr 3.4 - we strip out certain HTML constructs (like trademark and
registered, for example) in a field as defined below - it was working in
S