Having the same data as two cores (even on different disks) on the same
instance is, I'd say, pointless.
Basically, Solr makes heavy use of memory to cache the data you have on
disk, both as in-heap caches and as the OS disk cache, i.e. memory not
allocated to the JVM.
By having two cores the sa
I don't have time now, but I'll keep that in mind.
Thanks.
2016-07-19 12:27 GMT-03:00 Erick Erickson :
> There is a lot of activity in the ParallelSQL world, all being done
> by a very few people, so it's a matter of priorities. Can you
> consider submitting a patch?
>
> Best,
> Erick
>
> On Tue
Rajesh,
I would definitely suggest you debug your use case, or at least give
much more detail:
"is not giving me desired output" is basically useless. Try to describe
what your desired output was, what you get instead, any logs you have,
etc.
Cheers
On Tue, Jul 19, 2016 at 8:42 AM,
Hi,
By not getting the desired output I mean that I am not able to filter
the results on the basis of the CFQ parameter; when I query Solr with a
CFQ value that is a normal string, i.e. with no special characters, it
works fine.
Do let me know in case more info is required. For me my main pro
Yes, it's the intended behavior. The whole point of the
onlyIfDown flag was as a safety valve for those
who wanted to be cautious and guard against typos
and the like.
If you specify onlyIfDown=false and the node still
isn't removed from ZK, that's not right.
Best,
Erick
On Tue, Jul 19, 2016 at 10
Banana has a facet panel that allows you to configure several fields to
facet on; you can have multiple fields and they will show up as an
accordion. However, keep in mind that the field needs to be non-tokenized
for faceting (i.e. a string field), and upon selection the filter is added
to the fq parameter in t
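As an illustration of that fq mechanism (the collection and field names here are made up, not from the thread), a facet selection in Banana typically ends up as a request along these lines:

```
http://localhost:8983/solr/mycollection/select?q=*:*&facet=true&facet.field=category&fq=category:"books"
```

Each additional selection appends another fq clause, so the filters combine as an AND across fields.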
Nick, Thanks for your help. I'll test it out and respond back.
On Wed, Jul 20, 2016 at 11:52 AM, Nick Vasilyev
wrote:
> Banana has a facet panel that allows you to configure several fields to
> facet on, you can have multiple fields and they will show up as an
> accordion. However, keep in mind
Hi,
I am fairly new to Solr and Banana. I would like to configure my Banana
dashboard with the indexed Solr collection.
I would like to have the result as a data grid alone.
I need to make multiple filters to the fields in the data, and make it
configurable.
Eg: My Panel would be a table, w
Hello folks,
I am fairly new to solr + banana, especially banana.
I am trying to configure banana for faceted search for a collection in
solr.
I want to be able to have multiple facets parameters on the left and see
the results of selections on my data table on the right. Exactly like
guided Nav
Hi,
I am trying to set up SolrCloud in Azure with the following system requirements:
Solr - 4 VMs for 4 Solr nodes
Zookeeper - 3 VMs for the Zookeeper cluster
I have found an ARM template for Zookeeper which will create a Zookeeper
cluster with n nodes, but I don't find any for Solr.
Has anyone tried to configu
-- Forwarded message --
From: "Pushkar Raste"
Date: Jul 20, 2016 11:08 AM
Subject: About SOLR-9310
To:
Cc:
Hi,
https://issues.apache.org/jira/browse/SOLR-9310
PeerSync replication in Solr seems to be completely broken since
the fingerprint check was introduced (or maybe some other
I am getting an OOM error trying to combine streaming operations. I think the
sort is the issue. This test was done on a single replica cloud setup of
v6.1 with 4GB heap. col1 has 1M docs. col2 has 10k docs. The search for each
collection was q=*:*. Using SolrJ:
CloudSolrStream searchStream = new
Hi - I'm not sure how to enable autoAddReplicas to be true for
collections. According to here:
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
it is specified in solr.xml, but I tried adding:
true
and that results in an error. What am I doing wrong?
Thanks!
-Joe
Maybe the problem here is some confusion/ambiguity about the meaning of
"down"?
TL;DR: think of "onlyIfDown" as "onlyIfShutDownCleanly"
IIUC, the purpose of 'onlyIfDown' is a safety valve so that (by default)
the cluster will prevent you from removing a replica that wasn't shut down
*cleanly
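A sketch of the two variants of the call (the collection, shard, and replica names below are hypothetical):

```
# Refuses unless the replica's recorded state is already "down":
/admin/collections?action=DELETEREPLICA&collection=test&shard=shard1&replica=core_node2&onlyIfDown=true

# Removes the replica regardless of its state:
/admin/collections?action=DELETEREPLICA&collection=test&shard=shard1&replica=core_node2&onlyIfDown=false
```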
Erick,
My test procedure has been
- empty OS cache
- reload Solr cores (to empty solr caches)
- execute the 3M queries with users/thread = 50 (using JMeter)
- record Solr reported stats and JMeter stats
Thank you for pointing me in the right direction of CPU not doing much work.
That led me to
autoAddReplicas is _not_ specified in solr.xml. The things you can
change in solr.xml are some of the properties used in dealing with
collections _created_ with autoAddReplicas. See the CREATE action in
the collections API here:
https://cwiki.apache.org/confluence/display/solr/Collections+API#Colle
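So the flag goes on the CREATE call rather than into solr.xml; roughly like this (the collection name and shard/replica counts are invented for illustration):

```
/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&autoAddReplicas=true
```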
It's likely that the SortStream is the issue. With the sort function you
need enough memory to sort all the tuples coming from the underlying
stream. The sort stream can also be done in parallel so you can split the
tuples from col1 across N worker nodes. This will give you faster sorting
and apply
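A rough sketch of what that parallel sort could look like as a streaming expression (the field names, worker count, and zkHost are assumptions, not taken from the thread; partitionKeys is what splits the tuples from col1 across the workers):

```
parallel(col1,
  sort(
    search(col1, q="*:*", fl="id,field_a", sort="id asc",
           qt="/export", partitionKeys="id"),
    by="field_a asc"),
  workers=4, sort="field_a asc", zkHost="localhost:9983")
```

Each worker then only has to hold its share of the 1M tuples in memory, instead of one node sorting the whole stream.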
Thank you Erick! I misread the webpage.
-Joe
On 7/20/2016 7:57 PM, Erick Erickson wrote:
autoAddReplicas is _not_ specified in solr.xml. The things you can
change in solr.xml are some of the properties used in dealing with
collections _created_ with autoAddReplicas. See the CREATE action in
Hi, I am trying to index data in a spatial format into Solr 5.5.0, but I
got this error message:
error1
error2
anybody can help me to solve this pls?
Thanks a lot everyone!
By setting onlyIfDown=false, it did remove the replica, but it still
returned a failure message.
That confuses me.
Anyway, thanks Erick and Chris.
Regards,
Jerome
On Thu, Jul 21, 2016 at 5:47 AM, Chris Hostetter
wrote:
>
> Maybe the problem here is some confusion/ambiguity ab
I'm hoping I'm just not using the streaming API correctly. I have about 30M
docs (~ 15 collections) in production right now that work well with just 4GB
of heap (no streaming). I can't believe streaming would choke on my test
data.
I guess there are 2 primary requirements. Reindexing an entire col
One of the things to consider is using a hashJoin for the first and second
joins. If you have one large table and two smaller tables, the hashJoin
makes a lot of sense.
One possible flow would be:
parallel reduce to do the grouping
hashJoin to the second table
hashJoin to the third table
The hashJoins
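Under those assumptions (the collection and field names below are invented for illustration), the flow might look roughly like:

```
hashJoin(
  hashJoin(
    parallel(bigCol,
      reduce(
        search(bigCol, q="*:*", fl="id,groupField", sort="groupField asc",
               qt="/export", partitionKeys="groupField"),
        by="groupField",
        group(sort="id asc", n="10")),
      workers=4, sort="groupField asc", zkHost="localhost:9983"),
    hashed=search(col2, q="*:*", fl="joinKey,field_b", sort="joinKey asc"),
    on="id=joinKey"),
  hashed=search(col3, q="*:*", fl="joinKey,field_c", sort="joinKey asc"),
  on="id=joinKey")
```

Since hashJoin reads its hashed stream fully into memory, the two smaller tables go on the hashed side while the large grouped stream flows through.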