I tried my last proposition: editing clusterstate.json to add a dummy
frontend shard seems to work. I made sure the ranges were not overlapping.
Doesn't this resolve the SolrCloud issue as specified above?
Would adding a dummy shard instead of a dummy collection resolve the
situation? E.g. editing clusterstate.json from a ZooKeeper client and
adding a shard with a 0-range so no docs are routed to this core. This core
would be on a separate server and act as the collection gateway.
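For concreteness, here is a sketch of what such a dummy shard entry could
look like in clusterstate.json (Solr 4.x layout; the collection, shard, and
host names are made up, and representing the empty range as null is my
assumption):

    "shards": {
      "shard1": {
        "range": "80000000-7fffffff",
        "state": "active",
        "replicas": { ... }
      },
      "shard-gateway": {
        "range": null,
        "state": "active",
        "replicas": {
          "core_node9": {
            "core": "collection1_gateway",
            "base_url": "http://gateway-host:8983/solr",
            "node_name": "gateway-host:8983_solr",
            "state": "active",
            "leader": "true"
          }
        }
      }
    }

The edited file can be pushed back with the zkcli script shipped with Solr,
e.g.:

    zkcli.sh -zkhost zk1:2181 -cmd putfile /clusterstate.json /tmp/clusterstate.json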
What Shawn has described is exactly what we do: a classic distributed,
non-SolrCloud setup. This is why it was possible to implement a custom
frontend Solr instance.
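For readers who haven't used it: in that classic setup the frontend
instance fans the query out itself with an explicit shards parameter,
along these lines (host and core names are hypothetical):

    http://frontend:8983/solr/front-core/select
        ?q=some+query
        &shards=host1:8983/solr/core1,host2:8983/solr/core2

The frontend core holds no documents; it only distributes the request and
merges the per-shard results.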
On Wed, Oct 2, 2013 at 12:42 AM, Shawn Heisey wrote:
Hi Shawn,
I know that every node operates as a frontend. This is the way our cluster
currently runs.
If I separate the frontend from the nodes which hold the shards, I can give
it a different amount of CPU and RAM (e.g. a large amount of RAM for the
JVM, because this server won't need the OS cache for read
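To illustrate the point, the two node types could be started with different
JVM sizing, roughly like this (Jetty-based Solr 4.x start; the sizes are
only an example):

    # shard node: modest heap, leave most RAM to the OS page cache for the index
    java -Xms4g -Xmx4g -jar start.jar

    # dedicated frontend node: larger heap for merging, no index files to cache
    java -Xms16g -Xmx16g -jar start.jar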
Hi Dmitry,
I'm trying to examine your suggestion to create a frontend node. It sounds
pretty useful.
I saw that every node in a Solr cluster can serve requests for any
collection, even if it does not hold a core of that collection. Because of
that, I thought that adding a new node to the cluster (ak
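That behaviour can be checked directly, e.g. (hypothetical host name; per
the observation above, a node hosting no core of collection1 forwards the
request to the nodes that do):

    curl 'http://new-node:8983/solr/collection1/select?q=*:*&rows=0'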
Manuel,
Whether to have the frontend Solr as an aggregator of shard results depends
on your requirements. To repeat, we found merging from many shards very
inefficient for our use case. It can be the opposite for you (i.e. it
requires testing). There are some limitations with distributed search, see her
Dmitry - currently we don't have such a front end; creating it sounds like
a good idea. And yes, we do query all 36 shards on every query.
Mikhail - I do think 1 minute is enough data, as during this exact minute I
had a single query running (that took a qtime of 1 minute). I wanted to
isolate t
Hi Manuel,
The frontend Solr instance is the one that does not have its own index and
does the merging of the results. Is this the case? If yes, are all 36
shards always queried?
Dmitry
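For a classic (non-SolrCloud) setup, one common way to build such an
index-less frontend is an empty core whose request handler carries the
shard list as a default, roughly like this in the frontend core's
solrconfig.xml (host and core names are made up):

    <requestHandler name="/select" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="shards">host1:8983/solr/core1,host2:8983/solr/core2</str>
      </lst>
    </requestHandler>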
On Mon, Sep 9, 2013 at 10:11 PM, Manuel Le Normand <
manuel.lenorm...@gmail.com> wrote:
Hello Manuel,
One minute of sampling gives too little data. Lowering the term index
interval should help; however, I don't know how the FST really behaves with
it. It definitely helped on 3.x.
Would you mind if I ask which OS you have and which Directory
implementation is actually used?
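For reference, the term index interval is set in solrconfig.xml, with the
caveat Mikhail states himself: it clearly helped on 3.x, but the 4.x
default codec uses an FST-based terms index and may not honor it. A lowered
value might look like this (128 was the classic default):

    <indexConfig>
      <termIndexInterval>64</termIndexInterval>
    </indexConfig>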
On Sun, Sep 8, 2013 at 7:56 PM, Manu
Hi Dmitry,
I have Solr 4.3 and every query is distributed and merged back for ranking
purposes.
What do you mean by a frontend Solr?
On Mon, Sep 9, 2013 at 2:12 PM, Dmitry Kan wrote:
Are you querying your shards via a frontend Solr? We have noticed that
querying becomes much faster if result merging can be avoided.
Dmitry
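One way to measure the merging overhead is to compare a distributed query
with the same query pinned to a single core via distrib=false (a standard
Solr parameter; host and core names are hypothetical):

    # distributed: fans out to all shards and merges the results
    curl 'http://host1:8983/solr/core1/select?q=foo'

    # same core only: no fan-out, no merging
    curl 'http://host1:8983/solr/core1/select?q=foo&distrib=false'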
On Sun, Sep 8, 2013 at 6:56 PM, Manuel Le Normand <
manuel.lenorm...@gmail.com> wrote:
Hello all,
Looking at the 10% slowest queries, I get very bad performance (~60 sec
per query).
These queries have lots of conditions on my main field (more than a
hundred), including phrase queries, and rows=1000. I only return ids,
though.
I can quite firmly say that this bad performance is due
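For concreteness, the slow queries described above have roughly this shape
(the field name and terms are invented; the real queries carry 100+
clauses):

    q=content:(term1 OR term2 OR "an exact phrase" OR ... OR term100)
    &rows=1000
    &fl=id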