Hello,
I'm running a 12M document index which I'd like to frequently update. I'm
having problems doing so
(http://old.nabble.com/NullPointerException-thrown-during-updates-to-index-td26613309.html)
and am wondering now if it has to do with the way I'm structuring the
updates. I have a few questions.
Just to clarify, the error is being thrown FROM a search, DURING an update.
This error is making distributed SOLR close to unusable for me. Any ideas?
Does SOLR fail on searches if one node takes too long to respond?
hossman wrote:
>
> : Hi,
> : I'm running a distributed solr index (3 nod
Erick Erickson wrote:
>
> What version are you using? If a nightly build, from when?
>
> Thanks
> Erick
>
> On Wed, Dec 2, 2009 at 12:53 PM, smock wrote:
>
>>
>> Hi,
>> I'm running a distributed solr index (3 nodes) and have n
Hi,
I'm running a distributed solr index (3 nodes) and have noticed frequent
exceptions thrown during updates. The exception (see below for full trace)
occurs in the mergeIds method of QueryComponent, in this code block:
Map<Object,ShardDoc> resultIds = new HashMap<Object,ShardDoc>();
for (int i = resultSize - 1; i >= 0;
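For context, here is a rough sketch of what that merge step is doing, written as a standalone Python illustration (this is not Solr's actual code): each shard returns its top documents, the coordinator merges them into one ranked list, and resultIds records each document's final position. A null entry from a shard that failed to respond is exactly the kind of thing a loop like this would trip over.

```python
import heapq

def merge_shard_results(shard_results, rows):
    """Merge per-shard (score, doc_id) lists into one ranked result set.

    Returns a dict mapping doc_id -> position in the merged response,
    analogous to Solr's resultIds map.
    """
    merged = heapq.nlargest(
        rows, (doc for shard in shard_results for doc in shard))
    return {doc_id: pos for pos, (score, doc_id) in enumerate(merged)}

shard_a = [(0.9, "a1"), (0.4, "a2")]
shard_b = [(0.7, "b1"), (0.2, "b2")]
print(merge_shard_results([shard_a, shard_b], 3))
# {'a1': 0, 'b1': 1, 'a2': 2}
```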
Hello,
I'm receiving a java.net.SocketException (see below) on searches, during
updates to my index. I'm running a distributed index with 3 shards, each
shard has about 3M docs. I add documents in batches of 100, commit after
50K updates, and optimize nightly. The errors happen intermittently du
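The update cadence described above (batches of 100, a commit every 50K adds) can be sketched as a loop. The add_batch and commit callables here are placeholders standing in for whatever Solr client is in use, not a real API.

```python
def index_in_batches(docs, add_batch, commit,
                     batch_size=100, commit_every=50_000):
    """Add docs in fixed-size batches, committing every `commit_every` adds."""
    batch = []
    since_commit = 0
    for doc in docs:
        batch.append(doc)
        if len(batch) == batch_size:
            add_batch(batch)
            since_commit += batch_size
            batch = []
            if since_commit >= commit_every:
                commit()
                since_commit = 0
    if batch:
        add_batch(batch)
    commit()  # final commit so the last partial batch becomes visible

# Tiny demo: 250 docs, batches of 100, commit every 200 adds.
adds, commits = [], []
index_in_batches(range(250), adds.append, lambda: commits.append(1),
                 batch_size=100, commit_every=200)
print(len(adds), len(commits))
# 3 2
```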
The FacetComponent does work.
Shalin Shekhar Mangar wrote:
>
> On Sat, Sep 12, 2009 at 1:20 AM, smock wrote:
>
>>
>> I'd like to propose a change to the facet response structure. Currently,
>> it
>> looks like:
>>
>> {'facet_fields':{
I'd like to propose a change to the facet response structure. Currently, it
looks like:
{'facet_fields':{'field1':[('value1',count1),('value2',count2),(null,missingCount)]}}
My immediate problem with this structure is that null is not of the same
type as the 'value's. Also, the meaning of the (
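To make the type mismatch concrete, here is a sketch of what a client has to do with the current structure, and one shape the restructured response could take (the field and value names are made up; this is an illustration of the proposal, not an existing Solr format).

```python
def facet_counts_to_dict(pairs):
    """Split a facet_fields value list into (value -> count) plus a
    separate missing count, instead of overloading None as a value."""
    counts = {}
    missing = 0
    for value, count in pairs:
        if value is None:
            missing = count  # the (null, missingCount) entry
        else:
            counts[value] = count
    return {"counts": counts, "missing": missing}

raw = [("value1", 10), ("value2", 3), (None, 7)]
print(facet_counts_to_dict(raw))
# {'counts': {'value1': 10, 'value2': 3}, 'missing': 7}
```

With the missing count pulled out into its own key, every remaining entry has a string value, so clients no longer need to special-case null.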
Hello,
I'm trying to use the schema browser (/file/?file=schema.xml) to examine the
schema of my solr installation, but am having problems with character
encodings. Everything I have is defined in UTF-8, and I can read the file
locally with that encoding without any problems. However, in the
we
I'd like to set up case insensitive matching on a facet.prefix, but would
like the facet handler to return the stored value rather than the indexed
value. For instance, if a field value is 'Yes', I'd like facet.prefix to
match on 'yes' but return 'Yes' - is this behavior possible to set up?
Thanks
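One possible client-side workaround for the behavior asked about above, sketched in Python: match the prefix against a lowercased form of each value but report the stored casing. (In Solr this would typically mean faceting on a lowercased copyField and mapping results back; the function below only illustrates the matching logic.)

```python
def case_insensitive_prefix_facet(stored_values, prefix):
    """Match `prefix` case-insensitively, but return the stored casing."""
    prefix = prefix.lower()
    return [v for v in stored_values if v.lower().startswith(prefix)]

print(case_insensitive_prefix_facet(["Yes", "yes, mostly", "No"], "yes"))
# ['Yes', 'yes, mostly']
```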
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: smock
>> To: solr-user@lucene.apache.org
>> Sent: Wednesday, March 25, 2009 2:37:26 PM
>> Subject: Solr OpenBitSet OutofMemory Error
>
Hello,
After running a nightly release from around January of Solr for about 4
weeks without any problems, I'm starting to see OutofMemory errors:
Mar 24, 2009 1:35:36 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util
I'm using 1.3 - are the nightly builds stable enough to use in production?
yonik wrote:
>
> Are you on Solr 1.3 or a recent nightly build? The development
> version of 1.4 has a number of scalability enhancements.
>
> -Yonik
>
> On Fri, Jan 9, 2009 at 12:18 AM, sm
Hi Yonik,
In some ways I have a 'small index' (~8 million documents at the moment).
However, I have a lot of attributes (currently about 30, but I'm expecting
that number to keep growing) and am interested in faceting across all of
them for every search (on a completely unrelated note, if you h
meshes with my load testing of Solr (single full index is
performing better than the distributed index). I may have to stick with
Sphinx, though, if I can't boost the performance of Solr on a single box.
-Harish
yonik wrote:
>
> On Thu, Jan 8, 2009 at 10:03 PM, smock wrote:
>> I
essing, it was a net win (I
saw roughly a factor of n speedup, where n was the number of
processors/shards).
Thanks again, for all your help, this has been really useful so far.
-Harish
yonik wrote:
>
> On Thu, Jan 8, 2009 at 9:25 PM, smock wrote:
>> I should have more than enough RAM
g).
Thanks for your help!
-Harish
Mike Klaas wrote:
>
> On 8-Jan-09, at 3:37 PM, smock wrote:
>
>>
>> Assuming I have enough RAM then, should I be able to get a
>> performance boost
>> with my current setup? Basically, the question I am trying to
>
ex in RAM. I'm
not super worried about requests/sec. right now - I'd rather each individual
search be faster, which is why I'm interested in distributing the index
across my 8 procs.
Thanks very much!
-Harish
yonik wrote:
>
> On Thu, Jan 8, 2009 at 4:51 PM, smock wrote:
>
er to Solr.
Thanks again,
-Harish
yonik wrote:
>
> Distributed search requires more work (more than one pass.) If you
> weren't CPU bound to begin with, it's definitely going to make things
> worse by splitting up the index on the same box.
>
> -Yonik
>
>
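The "more than one pass" mentioned above refers to how Solr's distributed search works: a first pass collects only ids and sort values from every shard, the coordinator merges them, and a second pass fetches full documents for just the winning ids. A toy sketch (illustrative only, with in-memory dicts standing in for per-shard HTTP requests):

```python
def distributed_search(shards, rows):
    """Two-pass distributed search sketch.

    Pass 1: each shard contributes (score, doc_id) for its top `rows`.
    Pass 2: only the globally merged winners are fetched in full.
    """
    candidates = [(score, doc_id, shard)
                  for shard in shards
                  for score, doc_id in shard["ranked"][:rows]]
    winners = sorted(candidates, key=lambda t: -t[0])[:rows]
    return [shard["docs"][doc_id] for score, doc_id, shard in winners]

shard_a = {"ranked": [(0.9, "a1")], "docs": {"a1": {"id": "a1", "title": "A"}}}
shard_b = {"ranked": [(0.7, "b1")], "docs": {"b1": {"id": "b1", "title": "B"}}}
print(distributed_search([shard_a, shard_b], 1))
# [{'id': 'a1', 'title': 'A'}]
```

The extra pass is the overhead Yonik describes: on a single box it adds coordination work without adding any CPU, which is why sharding there can make a CPU-bound search slower.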
Hi All,
I'm very new to Solr, and also fairly new to Java and servlet containers, etc.
I'm trying to set up Solr on a single machine with a distributed index. My
current implementation uses Tomcat as a servlet container with multiple
instances of Solr being served. Each instance of Solr is a shard.
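Querying a setup like that is just a normal Solr request plus a shards parameter listing every core. The host, port, and core names below are made-up placeholders for a single-machine, multi-core layout.

```python
from urllib.parse import urlencode

# Hypothetical cores, all served by one Tomcat on one box.
shards = ["localhost:8080/solr/shard1",
          "localhost:8080/solr/shard2",
          "localhost:8080/solr/shard3"]

params = {"q": "*:*", "shards": ",".join(shards)}
query_string = urlencode(params)
print(query_string)
```

Sending that query string to any one of the cores makes it the coordinator, which fans the request out to every shard listed and merges the results.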