you're seeing _sure_ sounds similar
Best
Erick
On Mon, Nov 19, 2012 at 12:49 PM, Buttler, David wrote:
> Answers inline below
>
> -----Original Message-----
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Saturday, November 17, 2012 6:40 AM
> To: solr-user@lucene.apache.org
If you just want to store the data, you can dump it into HDFS sequence files.
While HBase is really nice if you want to process and serve data in real time, it
adds overhead when used as pure storage.
Dave
-----Original Message-----
From: Cool Techi [mailto:cooltec...@outlook.com]
Sent: Friday, N
tstrap_confdir=conf -jar start.jar
DB> load data
DB> Let me know if something is unclear. I can run through the process again
and document it more carefully.
DB>
DB> Thanks for looking at it,
DB> Dave
Best
Erick
On Fri, Nov 16, 2012 at 2:55 PM, Buttler, David wrote:
> My t
if that affects things?
Trying to determine if this is a known issue or not.
- Mark
On Nov 16, 2012, at 1:34 PM, "Buttler, David" wrote:
> Hi all,
> I buried an issue in my last post, so let me pop it up.
>
> I have a cluster with 10 collections on it. The first collec
Hi all,
I buried an issue in my last post, so let me pop it up.
I have a cluster with 10 collections on it. The first collection I loaded
works perfectly. But every subsequent collection returns an inconsistent
number of results for each query. The queries can be simply *:*, or more
complex
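One way to pin down the "inconsistent number of results" symptom is to repeat the same query and compare numFound across runs. A minimal sketch, assuming a Solr node on the default port 8983 and a collection named collection2 (names are illustrative, not from the thread):

```shell
# Pull numFound out of a Solr JSON response body.
extract_numfound() {
  sed -n 's/.*"numFound":\([0-9]*\).*/\1/p'
}

# Against a live cluster you would loop:
#   for i in 1 2 3 4 5; do
#     curl -s 'http://localhost:8983/solr/collection2/select?q=*:*&wt=json&rows=0' | extract_numfound
#   done
# Identical output on every iteration indicates consistent counts;
# varying numbers point at replicas holding different document sets.

# Offline check of the parser against a canned response:
echo '{"responseHeader":{"status":0},"response":{"numFound":21344,"start":0,"docs":[]}}' | extract_numfound
```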
rkill, unless you expect to expand
significantly. 20m would likely be okay with two or three shards. You
can store the indexes for each core on different disks which can give
some performance benefit.
Just some thoughts.
Upayavira
On Thu, Nov 15, 2012, at 11:04 PM, Buttler, David wrote:
> Hi,
Hi,
I have a question about the optimal way to distribute solr indexes across a
cloud. I have a small number of collections (less than 10). And a small
cluster (6 nodes), but each node has several disks - 5 of which I am using for
my solr indexes. The cluster is also a hadoop cluster, so the
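One way to spread per-core indexes across separate disks, as asked about above, is the dataDir attribute in the legacy Solr 4 solr.xml. A sketch; core names and mount points here are illustrative:

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- each core's index lives on a different physical disk -->
    <core name="collection1_shard1" instanceDir="collection1_shard1"
          dataDir="/disk1/solr/collection1_shard1"/>
    <core name="collection1_shard2" instanceDir="collection1_shard2"
          dataDir="/disk2/solr/collection1_shard2"/>
  </cores>
</solr>
```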
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, August 23, 2012 6:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Cloud assigning incorrect port to shards
Can you post your solr.xml file?
On Thursday, August 23, 2012, Buttler, David wrote:
> I am using the jetty container from the example
Subject: Re: Cloud assigning incorrect port to shards
What container are you using?
Sent from my iPhone
On Aug 22, 2012, at 3:14 PM, "Buttler, David" wrote:
> Hi,
> I have set up a Solr 4 beta cloud cluster. I have uploaded a config
> directory, and linked it with a
Hi,
I have set up a Solr 4 beta cloud cluster. I have uploaded a config directory,
and linked it with a configuration name.
I have started two solr on two computers and added a couple of shards using the
Core Admin function on the admin page.
When I go to the admin cloud view, the shards all h
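The "add a shard via Core Admin" step above maps to a CoreAdmin CREATE call, which can also be issued directly. A sketch, assuming a node on the default port; the core, collection, and shard names are illustrative:

```shell
# Build the Solr 4 CoreAdmin CREATE request for a new shard.
SOLR_URL="http://localhost:8983/solr"
CREATE_URL="${SOLR_URL}/admin/cores?action=CREATE&name=shard2&collection=mycollection&shard=shard2"
echo "$CREATE_URL"
# Against a live node:  curl -s "$CREATE_URL"
```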
21, 2012 at 4:46 PM, Buttler, David wrote:
> Hi all,
> I would like to use a single zookeeper cluster to manage multiple Solr cloud
> installations. However, the current design of how Solr uses zookeeper seems
> to preclude that. Have I missed a configuration option to set a zooke
Hi all,
I would like to use a single zookeeper cluster to manage multiple Solr cloud
installations. However, the current design of how Solr uses zookeeper seems to
preclude that. Have I missed a configuration option to set a zookeeper prefix
for all of a Solr cloud configuration's directories?
Is there a way to make shards.tolerant=true the default behavior?
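I am not aware of a global switch in Solr 4 for this; the usual workaround is to bake the parameter into the search handler's defaults in solrconfig.xml. A sketch against the stock /select handler:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <!-- return partial results instead of failing when a shard is down -->
    <str name="shards.tolerant">true</str>
  </lst>
</requestHandler>
```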
-----Original Message-----
From: Buttler, David [mailto:buttl...@llnl.gov]
Sent: Thursday, August 16, 2012 11:01 AM
To: solr-user@lucene.apache.org
Subject: solr 4 degraded behavior failure
Hi all,
I am testing out
Hi all,
I am testing out the cloud features in Solr 4, and I have an observation about
the behavior under failure.
Following the cloud tutorial, I set up a collection with 2 shards. I started 4
servers (so each shard is replicated twice). I added the test documents, and
everything works fine.
You are not putting these files in /tmp, are you? That directory is sometimes
wiped on shutdown, depending on the OS.
-----Original Message-----
From: vempap [mailto:phani.vemp...@emc.com]
Sent: Wednesday, August 15, 2012 3:31 PM
To: solr-user@lucene.apache.org
Subject: Re: solr.xml entries got deleted when
Here are my steps:
1) Download apache-solr-4.0.0-BETA
2) Untar into a directory
3) cp -r example example2
4) cp -r example exampleB
5) cp -r example example2B
6) cd example; java -Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=myconf -DzkRun
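Step 6 above is cut off; for reference, the startup sequence in the Solr 4 cloud tutorial looks roughly like this (ports are the tutorial defaults; -DzkRun starts embedded ZooKeeper on port 9983, and -DnumShards=2 splits the collection across the first two nodes):

```shell
# node 1: bootstraps the config into ZooKeeper and runs embedded ZK
cd example
java -Dbootstrap_confdir=./solr/collection1/conf \
     -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar

# node 2: joins the cluster, pointing at node 1's embedded ZooKeeper
cd ../example2
java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
```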
I just downloaded the solr 4 beta, and was running through the tutorial. It
seemed to me that I was getting duplicate counts in my facet fields when I had
two shards and four cores running. For example,
http://localhost:8983/solr/collection1/browse
Reports 21 entries in the facet cat:electronic
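A direct way to inspect the suspected double counting is to run the facet query explicitly rather than through /browse. A sketch; the URL and field come from the tutorial example, and the interpretation in the comment is a hypothesis, not confirmed in this thread:

```shell
# Facet on the tutorial's cat field, returning counts only.
SOLR_URL="http://localhost:8983/solr/collection1"
FACET_URL="${SOLR_URL}/select?q=*:*&rows=0&facet=true&facet.field=cat&wt=json"
echo "$FACET_URL"
# Against a live node:  curl -s "$FACET_URL"
# If a category shows roughly double the expected count, the same
# documents may have been indexed into more than one core.
```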
I am using the solr cloud branch on 6 machines. I first load PubMed into
HBase, and then push the fields I care about to solr. Indexing from HBase to
solr takes about 18 minutes. Loading to hbase takes a little longer (2
hours?), but it only happens once so I haven't spent much time trying to