Drop me a line if you have more questions.
Regards
> From: mlie...@impetus.com
> To: solr-user@lucene.apache.org
> Subject: Re: Solr with Hadoop
> Date: Thu, 18 Jul 2013 15:41:36 +
>
> Rajesh,
>
> If you require an integration between Solr and Hadoop or NoSQL, I
> would recommend using a commercial distribution.
Rajesh,
If you require an integration between Solr and Hadoop or NoSQL, I
would recommend using a commercial distribution. I think most are free to
use as long as you don't require support.
I inquired about the Cloudera Search capability, but it seems that so far
it is just preliminary:
> If you do distributed indexing correctly, what about updating the documents
> and what about replicating them correctly?
Yes, you can do it and it'll work great.
On Mon, Jul 5, 2010 at 7:42 AM, MitchK wrote:
>
> I need to revive this discussion...
>
> If you do distributed indexing correctly, what about updating the documents
> and what about replicating them correctly?
I need to revive this discussion...
If you do distributed indexing correctly, what about updating the documents
and what about replicating them correctly?
Does this work? Or wasn't this an issue?
Kind regards
- Mitch
> - Original Message
> From: Jon Baer
> To: solr-user@lucene.apache.org
> Sent: Tue, June 22, 2010 12:47:14 PM
> Subject: Re: solr with hadoop
>
> I was playing around w/ Sqoop the other day, it's a simple Cloudera tool for
> imports (mysql -> hdfs) @
> http://www.clouder
>> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>>
>> - Original Message
>> From: Stu Hood
>> To: solr-user@lucene.apache.org
>> Sent: Monday, January 7, 2008 7:14:20 PM
>> Subject: Re: solr with hadoop
>>
>> As Mike suggested, we use Hadoop to organize our data en route to Solr.
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
> - Original Message
> From: Stu Hood
> To: solr-user@lucene.apache.org
> Sent: Monday, January 7, 2008 7:14:20 PM
> Subject: Re: solr with hadoop
>
> As Mike suggested, we use Hadoop to organize our data en route to Solr.
I think a good solution could be to use Hadoop with SOLR-1301 to build Solr
shards and then use Solr distributed search against these shards (you will
have to copy them from HDFS to local disk to search against them).
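For anyone trying this, the copy-from-HDFS step can be done with the Hadoop
FileSystem API. A minimal sketch; the shard location and the local Solr data
directory below are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FetchShard {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);

        // Hypothetical locations: a shard built on HDFS by the SOLR-1301 job,
        // and the data directory of the local Solr core that will serve it
        Path src = new Path("/jobs/solr-build/shard1");
        Path dst = new Path("/var/solr/data/shard1/index");

        // Pull the finished shard down to local disk so Solr can open it
        hdfs.copyToLocalFile(src, dst);
        hdfs.close();
    }
}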
Hi,
We currently have a master-slave setup for Solr with two slave servers. We
are using SolrJ (StreamingUpdateSolrServer) to index the master, which
takes about 6 hours to index around 15 million documents.
I would like to explore Hadoop, in particular for the indexing job, using a
MapReduce approach.
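For reference, a minimal sketch of that kind of SolrJ streaming indexing,
assuming the 3.x-era StreamingUpdateSolrServer; the URL, queue size, thread
count and field names below are illustrative, not taken from the setup above:

import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexer {
    public static void main(String[] args) throws Exception {
        // Buffer up to 10000 docs, drained by 4 background threads
        StreamingUpdateSolrServer server =
            new StreamingUpdateSolrServer("http://master:8983/solr", 10000, 4);

        for (int i = 0; i < 1000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            doc.addField("title", "document " + i);
            server.add(doc);   // queued and sent asynchronously
        }
        server.commit();       // make the documents searchable
    }
}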
From: Stu Hood <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Monday, January 7, 2008 7:14:20 PM
Subject: Re: solr with hadoop
As Mike suggested, we use Hadoop to organize our data en route to Solr.
Hadoop allows us to load balance the indexing stage, and then we use
the raw Lucene IndexWriter.
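Stu's exact IndexWriter setup isn't shown above; as a rough illustration,
writing a shard directly with a recent Lucene API looks something like the
sketch below (paths and fields are made up):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class ShardWriter {
    public static void main(String[] args) throws Exception {
        // One indexing task could own one such directory, i.e. one shard
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        try (IndexWriter writer =
                 new IndexWriter(FSDirectory.open(Paths.get("/tmp/shard1")), cfg)) {
            Document doc = new Document();
            doc.add(new StringField("id", "42", Field.Store.YES));
            doc.add(new TextField("body", "some text to index", Field.Store.NO));
            writer.addDocument(doc);
        } // close() commits the segment files and releases the write lock
    }
}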
From: Mike Klaas <[EMAIL PROTECTED]>
Sent: Friday, January 4, 2008 3:04pm
To: solr-user@lucene.apache.org
Subject: Re: solr with hadoop
On 4-Jan-08, at 11:37 AM, Evgeniy Strokin wrote:
> I have a huge index (about 110 million documents, 100 fields
> each). But the size of the index is reasonable, it's about 70 GB.
Evgeniy,
Two simple options:
1) take your index, put it on N Solr search servers, and put them behind a load
balancer
2) take your index, split it in N (or create N smaller indices from scratch)
and put it on N Solr search servers (and see SOLR-303)
Each will help in a different way.
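To illustrate option 2: once the index is split, a distributed query only
needs the shards parameter. A minimal SolrJ sketch, assuming the 4.x-era
HttpSolrServer client; host and core names are made up:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DistributedQuery {
    public static void main(String[] args) throws Exception {
        // Any one of the N servers can coordinate the distributed request
        HttpSolrServer server = new HttpSolrServer("http://solr1:8983/solr");

        SolrQuery q = new SolrQuery("title:hadoop");
        // The coordinator fans the query out to every shard listed here
        q.set("shards", "solr1:8983/solr,solr2:8983/solr,solr3:8983/solr");

        QueryResponse rsp = server.query(q);
        System.out.println("hits: " + rsp.getResults().getNumFound());
    }
}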
Mike Klaas wrote:
On 4-Jan-08, at 11:37 AM, Evgeniy Strokin wrote:
I have a huge index (about 110 million documents, 100 fields
each). But the size of the index is reasonable, it's about 70 GB. All
I need is to increase performance, since some queries, which match a big
number of documents, are running slow.
On 4-Jan-08, at 11:37 AM, Evgeniy Strokin wrote:
I have a huge index (about 110 million documents, 100 fields
each). But the size of the index is reasonable, it's about 70 GB.
All I need is to increase performance, since some queries, which match
a big number of documents, are running slow.