Hopefully this question makes sense.
At the moment I'm using a DisMax query which looks something like the
following (massively cut-down):
?defType=dismax
&q=some query
&qf=field_one^0.5 field_two^1.0
I've got some localisation work coming up where I'd like to use the value
of one, sparsely populated [...]
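For illustration only (the locale_boost field name and the 1.0 default below are made up, not from the original setup): one way to fold a value from a sparsely populated field into scoring is eDisMax's multiplicative boost with a fallback for documents that have no value:
?defType=edismax
&q=some query
&qf=field_one^0.5 field_two^1.0
&boost=def(locale_boost,1.0)
def() returns the field value where one exists and the supplied default otherwise, so documents without the sparse field keep their normal score.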
[...] huge improvement for a static index
>
> this latter isn't a problem though since you don't have a static index
>
> Erick
>
> On Tue, Sep 24, 2013 at 4:13 AM, Neil Prosser wrote:
> > Shawn: unfortunately the current problems are with facet.method=enum!
>
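For anyone following the facet.method discussion, the method is chosen per request (or per field); an illustrative request, with category standing in for a real field name:
&facet=true
&facet.field=category
&facet.method=enum
&facet.enum.cache.minDf=100
facet.enum.cache.minDf makes the enum method skip the filterCache for terms whose document frequency falls below the threshold (the 100 is an arbitrary example), and f.category.facet.method=fc would switch just that field over to the field-cache method.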
> [...] unique values. It's possible that
> the total number of unique values isn't scaling with sharding. That is,
> each
> shard may have, say, 90% of all unique terms (number from thin air). Worth
> checking anyway, but a stretch.
>
> This is definitely unusual...
>
> Best
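One rough way to check the unique-term theory (host and core names below are placeholders, and this is only a sketch of the approach): hit each shard's core directly with the Luke handler and compare the per-field statistics it should report, in particular the distinct term count for the facet field:
curl 'http://shard1-host:8983/solr/collection1_shard1_replica1/admin/luke?fl=the_facet_field&wt=json'
curl 'http://shard2-host:8983/solr/collection1_shard2_replica1/admin/luke?fl=the_facet_field&wt=json'
If every shard reports close to the full term count for the field, the per-shard faceting work isn't shrinking as shards are added.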
> [...] are you using for the SolrCloud setup?
> 4.0.0 had lots of memory and zk related issues. What's the warmup time for
> your caches? Have you tried disabling the caches?
>
> Is this a static index, or are documents added continuously?
>
> The answers to these questions might help
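For reference, warming and cache behaviour are both controlled in solrconfig.xml; a minimal sketch with arbitrary sizes (autowarmCount="0" switches off warming for that cache, and removing the element disables the cache entirely):
<!-- sizes below are illustrative, not recommendations -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
Any newSearcher/firstSearcher listeners in the same file also add to warmup time if they run warming queries.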
Apologies for the giant email. Hopefully it makes sense.
We've been trying out SolrCloud to solve some scalability issues with our
current setup and have run into problems. I'd like to describe our current
setup, our queries and the sort of load we see and am hoping someone might
be able to spot the [...]
These machines are managing to get the
whole index into the Linux OS cache. Hopefully the 5GB minimum for field
cache and 8GB heap is what's causing this trouble right now.
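A quick way to sanity-check that the index really fits in the OS cache on a node (the path below is a placeholder):
# compare index size on disk with what the kernel reports as cached
du -sh /var/solr/data/collection1/data/index
free -m
If the cached figure from free comfortably exceeds the index size, the whole index can be served from the page cache; whatever the JVM heap claims is taken out of that pool.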
On 24 July 2013 19:06, Shawn Heisey wrote:
> On 7/24/2013 10:33 AM, Neil Prosser wrote:
> > The log for [...]
[...]'s important
for people. Both servers were running 4.3.1. I've since upgraded to 4.4.0.
If you need any more information or want me to do any filtering let me know.
On 24 July 2013 15:50, Timothy Potter wrote:
> Log messages?
>
> On Wed, Jul 24, 2013 at 1:37 AM, Neil Prosser wrote:
> > Your long GC pauses _might_ be ameliorated by allocating _less_
> > memory to the JVM, counterintuitive as that seems.
>
> or by using G1 :)
>
> See http://blog.sematext.com/2013/06/24/g1-cms-java-garbage-collector/
>
> Otis
> --
> Solr & ElasticSearch Support -- h[...]
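For the record, switching collectors is only a startup-flag change; an illustrative set of flags (heap size and pause target are placeholders, not a recommendation):
# G1 trades some throughput for more predictable pause times
java -Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar start.jar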
[...] I've taken because I'm working with our cluster!
On 22 July 2013 19:26, Lance Norskog wrote:
> Are you feeding Graphite from Solr? If so, how?
>
>
> On 07/19/2013 01:02 AM, Neil Prosser wrote:
>
>> That was overnight so I was unable to track exactly what happened ([...]
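One common pattern for getting numbers into Graphite, not necessarily what was being done in this thread (host names, core name and the metric path are placeholders): poll Solr's stats endpoint and write plaintext-protocol lines to Graphite's port 2003:
# pull stats from the admin mbeans endpoint
curl -s 'http://solr-host:8983/solr/collection1/admin/mbeans?stats=true&wt=json' -o stats.json
# extract whichever value you care about from stats.json, then emit it (0.87 below is a dummy value)
echo "solr.collection1.queryResultCache.hitratio 0.87 $(date +%s)" | nc graphite-host 2003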
Sorry, I should also mention that these leader nodes which are marked as
down can actually still be queried locally with distrib=false with no
problems. Is it possible that they've somehow got themselves out-of-sync?
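(For anyone wanting to reproduce that kind of local check, it's just a core-level query with distribution turned off; host and core names here are placeholders:
curl 'http://node5:8983/solr/collection1_shard3_replica2/select?q=*:*&rows=0&distrib=false'
Comparing numFound between the two replicas of a shard this way is a crude test for whether they have drifted apart.)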
On 22 July 2013 13:37, Neil Prosser wrote:
> No need to apologise [...]
> [...] Pardon me if I'm
> repeating stuff you already know!
>
> As far as your nodes coming and going, I've seen some people have
> good results by upping the ZooKeeper timeout limit. So I guess
> my first question is whether the nodes are actually going out of service
> or [...]
>
> [...] everything in the tlog
> to the new node, which might be a source of why it took so long for
> the new node to come back up. At the very least the new node you were
> bringing back online will need to do a full index replication (old
> style) to get caught up.
>
> Best
> Erick
>
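(Two of the knobs touched on above live in configuration rather than on the query side; a rough sketch with purely illustrative values. The ZooKeeper session timeout can be raised via zkClientTimeout, often passed as a system property that the stock solr.xml picks up, and a hard autoCommit in solrconfig.xml keeps the transaction log short so that tlog replay on restart stays quick:
-DzkClientTimeout=30000
<!-- values are illustrative only -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
)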
While indexing some documents to a SolrCloud cluster (10 machines, 5 shards
and 2 replicas, so one replica on each machine) one of the replicas stopped
receiving documents, while the other replica of the shard continued to grow.
That was overnight so I was unable to track exactly what happened (I'[...]
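For context, a layout like that is what the Collections API produces with a request along these lines (the collection name is a placeholder):
curl 'http://any-node:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=5&replicationFactor=2&maxShardsPerNode=1'
With ten nodes, five shards and replicationFactor=2, maxShardsPerNode=1 spreads exactly one core onto each machine.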