Hi,
As one of our requirements we need to back up master indexes to the slave
periodically. I've been able to successfully sync the index using the
"fetchindex" command,
http://localhost:9006/solr/audit_20090828_1/replication?command=fetchindex&masterUrl=http://localhost:8080/solr/audit_20090828_1/re
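A minimal sketch (plain JDK, no SolrJ) of triggering the same fetchindex call from Java - the host names, ports, and core name are taken from the URL above; the helper class itself is hypothetical:

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class FetchIndexTrigger {
    public static void main(String[] args) throws Exception {
        String slave = "http://localhost:9006/solr/audit_20090828_1/replication";
        String master = "http://localhost:8080/solr/audit_20090828_1/replication";
        // The masterUrl value must be URL-encoded when passed as a query parameter.
        String url = slave + "?command=fetchindex&masterUrl=" + URLEncoder.encode(master, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        System.out.println("fetchindex returned HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}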
There were two main reasons we went with a multi-core solution:
1) We found the indexing speed starts dipping once the index grows to a
certain size - in our case around 50G. We don't optimize, but we have
to maintain a consistent indexing speed. The only way we could do that
was to keep creating new cores.
Lici,
We're doing a similar thing with multi-core - when a core reaches
capacity (in our case 200 million records) we start a new core. We are
doing this via a web service call (the Create web service),
http://wiki.apache.org/solr/CoreAdmin
This is all done in java code - before writing we check the
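For illustration, a minimal SolrJ sketch of registering a core on the fly via CoreAdmin (assuming SolrJ 1.4; the core name and instance directory are hypothetical):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CreateCoreExample {
    public static void main(String[] args) throws Exception {
        // CoreAdmin requests go to the container root, not to a specific core.
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");
        // Re-use a shared instanceDir so every core picks up the same
        // schema.xml and solrconfig.xml.
        CoreAdminRequest.createCore("core_200m_2", "/opt/solr/common", server);
    }
}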
achieve this functionality? (Just
> wondering if you have used shell scripting or coded some 100%
> Java-based solution)
>
> Thx
>
>
> 2009/8/19 Noble Paul നോബിള് नोब्ळ् :
>> On Wed, Aug 19, 2009 at 2:27 AM, vivek sar wrote:
>>> Hi,
>>>
Hi,
We use a multi-core setup for Solr, where new cores are added
dynamically to solr.xml. Only one core is active at a time. My
question is: how can replication be done for multi-core, so that every
core is replicated on the slave?
I went over the wiki, http://wiki.apache.org/solr/SolrReplication
Hi,
A related question to "getting the latest records first": after trying a
few suggested ways (function query, index-time boosting) of getting
the latest first, I settled for the simple "sort" parameter,
sort=field+asc
As per the wiki, http://wiki.apache.org/solr/SchemaDesign?highlight=(sort),
Lucen
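A minimal SolrJ sketch of the sort-parameter approach described above (assuming SolrJ 1.4; the server URL and field name are hypothetical - note that ORDER.desc on a timestamp field is what puts the newest documents first):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SortedSearch {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr/core0");
        SolrQuery query = new SolrQuery("user:vivek");
        // desc on an indexed timestamp field returns the newest documents first.
        query.addSortField("timestamp", SolrQuery.ORDER.desc);
        QueryResponse rsp = server.query(query);
        System.out.println("hits: " + rsp.getResults().getNumFound());
    }
}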
Hi,
Does anyone know if Solr supports sorting by internal document ids,
i.e., like Sort.INDEXORDER in Lucene? If so, how?
Also, does anyone have insight into whether a function query loads up unique
terms (like field sorts do) in memory or not?
Thanks,
-vivek
On Fri, Jul 10, 2009 at 10:26 AM, vivek sar
evancyFAQ#head-b1b1cdedcb9cd9bfd9c994709b4d7e540359b1fd
>
> Bill
>
> On Thu, Jul 9, 2009 at 5:58 PM, vivek sar wrote:
>
>> How do we sort by internal doc id (say on one index only) using Solr?
>> I saw a couple of threads saying it (Sort.INDEXORDER) was not supported
>>
stamp approach.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Thursday, July 9, 2009 1:13:54 PM
>> Subject: Re: Boosting fo
,
-vivek
On Wed, Jul 8, 2009 at 7:47 PM, Otis
Gospodnetic wrote:
>
> Sort by the internal Lucene document ID and pick the highest one. That might
> do the job for you.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Origina
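A minimal raw-Lucene sketch of the doc-id suggestion quoted above (assuming Lucene 2.x-era APIs; this is not something Solr exposes directly):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

public class NewestByDocId {
    public static TopDocs newest(IndexSearcher searcher) throws Exception {
        // Internal doc ids are assigned in insertion order, so reverse
        // index order puts the most recently added match first.
        Sort byDocDesc = new Sort(new SortField(null, SortField.DOC, true));
        return searcher.search(new TermQuery(new Term("user", "vivek")), null, 1, byDocDesc);
    }
}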
Hi,
I'm trying to find a way to get the most recent entry for the
searched word. For example, if I have a document with a field named "user"
and I search for user:vivek, I want to get the document that was
indexed most recently. Two ways I could think of:
1) Sort by some timestamp field - but with mi
date
to be monitored. Any ideas?
Thanks,
-vivek
2009/6/9 Noble Paul നോബിള് नोब्ळ् :
> if you wish to intercept "read" calls, a filter is the only way.
>
>
> On Wed, Jun 10, 2009 at 6:35 AM, vivek sar wrote:
>> Hi,
>>
>> I've to intercept every request to
Hi,
I have to intercept every request to Solr (search and update) and log
some performance numbers. In order to do so I tried a servlet filter
and added this to Solr's web.xml,
<filter>
  <filter-name>IndexFilter</filter-name>
  <filter-class>com.xxx.index.filter.IndexRequestFilter</filter-class>
</filter>
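For what it's worth, a minimal sketch of what such a filter class might look like (the body is hypothetical; only the class name comes from the web.xml entry above):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class IndexRequestFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    // Times every request passing through the Solr webapp and logs
    // the elapsed milliseconds.
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(req, resp);
        } finally {
            System.out.println("request took " + (System.currentTimeMillis() - start) + " ms");
        }
    }

    public void destroy() { }
}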
lease help me out. Can you provide
>> some specific examples that show the way you used the create statement to
>> register new cores on the fly. Thank you.
>>
>> --KK
>>
>> On Tue, May 19, 2009 at 1:17 PM, vivek sar wrote:
>>
>>> Yeah,
data dir (in solrconfig.xml file) and create the core
> via REST call. It should work!!!
>
> Thanks & regards
> Prabhu.K
>
>
>
> vivek sar wrote:
>>
>> Hi,
>>
>> I tried the latest nightly build (04-01-09) - it takes the dataDir
>> pro
(which I'm using). Not sure if that can cause any problem. I do
use range queries for dates - would that have any effect?
Any other ideas?
Thanks,
-vivek
On Thu, May 14, 2009 at 8:38 PM, vivek sar wrote:
> Thanks Mark.
>
> I checked all the items you mentioned,
>
> 1) I'v
e Lucene term interval and raise
> it. Drop on deck searchers setting. Even then, 800 million...time to
> distribute I'd think.
>
> vivek sar wrote:
>>
>> Some update on this issue,
>>
>> 1) I attached jconsole to my app and monitored the memory usage.
>>
on what the Searcher might be holding on to and how we can change
that behavior?
Thanks,
-vivek
On Thu, May 14, 2009 at 11:33 AM, vivek sar wrote:
> I don't know if field type has any impact on the memory usage - does it?
>
> Our use cases require complete matches, thus there is no need
r can see them
>
> Best
> Erick
>
> On Wed, May 13, 2009 at 4:42 PM, vivek sar wrote:
>
>> Thanks Otis.
>>
>> Our use case doesn't require any sorting or faceting. I'm wondering if
>> I've configured anything wrong.
>>
>>
only 20K and you said this is a large index? That doesn't smell
> right...
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent:
at 4:01 PM, Jack Godwin wrote:
> Have you checked the maxBufferedDocs? I had to drop mine down to 1000 with
> 3 million docs.
> Jack
>
> On Wed, May 13, 2009 at 6:53 PM, vivek sar wrote:
>
>> Disabling first/new searchers did help for the initial load time, but
00 million documents and have specified 8G heap size.
Any other suggestions on what I can do to control Solr's memory consumption?
Thanks,
-vivek
On Wed, May 13, 2009 at 2:53 PM, vivek sar wrote:
> Just an update on the memory issue - might be useful for others. I
> read the foll
trick - at least the heap size is not growing as soon as Solr starts
up.
I ran some searches and they all came out fine. Index rate is also
pretty good. Would there be any impact of disabling these listeners?
Thanks,
-vivek
On Wed, May 13, 2009 at 2:12 PM, vivek sar wrote:
> Otis,
>
> In
RL, not a characteristic
> of a field. :)
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Wednesday, May 13, 2009 4:42:16 PM
o trigger snapshot creation.
> 3) see 1) above
>
> 1.5 billion docs per instance where each doc is cca 1KB? I doubt that's
> going to fly. :)
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>>
Hi,
I'm pretty sure this has been asked before, but I couldn't find a
complete answer in the forum archive. Here are my questions,
1) When Solr starts up, what does it load up in memory? Let's say
I've 4 cores with each core 50G in size. When Solr comes up, how much
of it would be loaded in
wait until commit completes? Right now the
search doesn't return while the commit is happening.
We are using Solr 1.4 (nightly build from 3/29/09).
Thanks,
-vivek
On Wed, Apr 15, 2009 at 11:41 AM, Mark Miller wrote:
> vivek sar wrote:
>>
>> Hi,
>>
>> I've index
he index
information which is not limited by any property - is that true?
Is there any workaround to limit the index size, besides limiting the
index itself?
Thanks,
-vivek
On Fri, May 8, 2009 at 10:02 PM, Shalin Shekhar Mangar
wrote:
> On Fri, May 8, 2009 at 1:30 AM, vivek sar wrote
> Hi,
>
> You are looking for maxMergeDocs, I believe.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Thursday, April 23, 200
Hi,
I'm using the multi-core feature of Solr. Each Solr instance maintains
multiple cores - each core of size 100G. I would like to delete older
core directories completely after 2 weeks (using file.delete).
Currently, Solr loads all the cores that are listed in solr.xml. I was
thinking of the following,
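For illustration, one plausible approach is to unload the core via CoreAdmin so Solr releases its file handles, then delete the directory - a minimal sketch (assuming SolrJ 1.4; core name and path are hypothetical):

import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class RetireOldCore {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");
        // Unload first; deleting a live core's files leads to errors.
        CoreAdminRequest.unloadCore("core_20090501", server);
        deleteRecursively(new File("/opt/solr/cores/core_20090501/data"));
    }

    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) {
                deleteRecursively(c);
            }
        }
        f.delete();
    }
}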
is
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Tuesday, May 5, 2009 1:49:21 PM
>> Subject: Using UUID for unique key
>>
>>
Hi,
I've distributed Solr instances. I'm using Java's UUID
(UUID.randomUUID()) to generate the unique id for my documents. Before
adding the unique key I was able to commit 50K records in 15 sec (pretty
constant over the growing index); after adding the unique key it's taking
over 35 sec for 50K and the
Hi,
Is there any configuration to control the segments' file size in
Solr? Currently, I've an index (70G) with 80 segment files and one of
the files is 24G. We noticed that in some cases a commit takes over 2
hours to complete (committing 50K records), whereas usually it
finishes in 20 seconds. Aft
writer can write to a specific
> index at a time.
>
>
> Otis --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Sunday, April 19, 2009 4:33:
Hi,
Is it possible to have two Solr instances share the same solr.home?
I've two Solr instances running on the same box and I was wondering if
I can configure them to have the same solr.home. I tried it, but it looks
like the second instance overwrites the first one's value in the
solr.xml (I'm usin
Hi,
I'm using Solr 1.4 (03/29 nightly build) and when searching on a
large index (40G) I get the same exception as in this thread,
HTTP Status 500 - 13724 java.lang.ArrayIndexOutOfBoundsException: 13724
at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
at org.apache.lucene.s
Any help on this? Could this error be because of something else (not
remote streaming issue)?
Thanks.
On Wed, Apr 15, 2009 at 10:04 AM, vivek sar wrote:
> Hi,
>
> I'm trying to use CSV (Solr 1.4, 03/29) for indexing, following this wiki
> (http://wiki.apache.org/solr/UpdateCSV).
the number of open file handles.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>> To: solr-user@lucene.apache.org
>> Sent: Friday, April 10, 2009 5:59:37 PM
>> Subject: Re: Ques
Hi,
I've an index where I commit every 50K records (using Solrj). Usually
this commit takes 20 sec to complete, but every now and then the commit
takes way too long - from 10 min to 30 min. I see more delays as the
index size continues to grow - once it gets over 5G I start seeing
long commit cycles
Hi,
I'm trying to use CSV (Solr 1.4, 03/29) for indexing, following this wiki
(http://wiki.apache.org/solr/UpdateCSV). I've updated the
solrconfig.xml to have these lines,
...
When I try to upload the csv,
curl
'http://localhost:8080/solr/20090414_1/update/csv?commi
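As an alternative to curl, the same upload can be done through SolrJ (assuming SolrJ 1.4's ContentStreamUpdateRequest; the file name is hypothetical):

import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class CsvUpload {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr/20090414_1");
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/csv");
        req.addFile(new File("records.csv"));
        // Commit as part of the same request, waiting for flush and searcher.
        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        server.request(req);
    }
}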
The machine's ulimit is set to 9000 and the OS has an upper limit of
12000 on open files. What would explain this? Has anyone tried Solr with 25
cores on the same Solr instance?
Thanks,
-vivek
2009/4/13 Noble Paul നോബിള് नोब्ळ् :
> On Tue, Apr 14, 2009 at 7:14 AM, vivek sar wrote:
>> So
. Any help is very much
appreciated.
Thanks,
-vivek
On Mon, Apr 13, 2009 at 10:52 AM, vivek sar wrote:
> Here is some more information about my setup,
>
> Solr - v1.4 (nightly build 03/29/09)
> Servlet Container - Tomcat 6.0.18
> JVM - 1.6.0 (64 bit)
> OS - Mac OS X Server 10.5.
mit and
auto-warming). As soon as the update/commit/auto-warming is completed I'm
able to run my queries again. Is there anything that could stop
searching while the update process is in progress - like a lock or
something?
Any other ideas?
Thanks,
-vivek
On Mon, Apr 13, 2009 at 12:14 AM, Shalin Sh
ing.
>
> getting decent search performance without autowarming is not easy.
>
> autowarmCount is an attribute of a cache. See here:
> http://wiki.apache.org/solr/SolrCaching
>
> On Mon, Apr 13, 2009 at 3:32 AM, vivek sar wrote:
>> Thanks Shalin.
>>
>> I noticed couple
ency is pretty low for us, but we want to make sure that
whenever it happens it is fast enough and returns results (instead of
an exception or a blank screen).
Thanks for all the help.
-vivek
On Sat, Apr 11, 2009 at 1:48 PM, Shalin Shekhar Mangar
wrote:
> On Sun, Apr 12, 2009 at 2:15 AM, vivek s
> On Sat, Apr 11, 2009 at 3:29 AM, vivek sar wrote:
>
>> I also noticed that the Solr app has over 6000 file handles open -
>>
>> "lsof | grep solr | wc -l" - shows 6455
>>
>> I've 10 cores (using multi-core) managed by the same Solr instance.
r holding on to all the segments from all the cores - is
it because of auto-warmer?
2) How can I reduce the open file count?
3) Is there a way to stop the auto-warmer?
4) Could this be related to "Tomcat returning blank page for every request"?
Any ideas?
Thanks,
-vivek
On Fri, Apr 10, 2
Hi,
I was using CommonsHttpSolrServer for indexing, but having two
threads writing (10K batches) at the same time was throwing,
"ProtocolException: Unbuffered entity enclosing request can not be repeated. "
I switched to StreamingUpdateSolrServer (using addBeans) and I don't
see the problem a
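A minimal sketch of the StreamingUpdateSolrServer approach (assuming SolrJ 1.4; the POJO, URL, queue size, and thread count are illustrative only):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.beans.Field;
import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

public class StreamingIndexer {
    // Hypothetical annotated POJO; field names must match the schema.
    public static class Record {
        @Field public String id;
        @Field public String user;
    }

    public static void main(String[] args) throws Exception {
        StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8080/solr/core0", 20000, 4);
        List<Record> batch = new ArrayList<Record>();
        for (int i = 0; i < 10000; i++) {
            Record r = new Record();
            r.id = "doc-" + i;
            r.user = "vivek";
            batch.add(r);
        }
        // addBeans queues the documents; the server's internal threads
        // stream them to Solr in the background.
        server.addBeans(batch);
        server.commit();
    }
}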
Yes - they're all new indexes. I can search them individually, but adding
"shards" throws a "Connection Reset" error. Is there any way I can debug
this, or any other pointers?
-vivek
On Fri, Apr 10, 2009 at 4:49 AM, Shalin Shekhar Mangar
wrote:
> On Fri, Apr 10, 2009
.java:1098)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
On Thu, Apr 9, 2009 at 6:51 PM, vivek sar wrote:
> I think the reason behind the "connection reset" is. Looking at the
> code it points to QueryComponent.me
or.java:845)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:637)
On Thu, Apr 9, 2009 at 5:01 PM, vivek sar wrote:
> Hi,
>
>
Hi,
I've another thread on multi-core distributed search, but I just
wanted to put a simple question on distributed search here to get some
response. I've a search query,
http://etsx19.co.com:8080/solr/20090409_9/select?q=usa -
which returns 10 results;
now if I add the "shards" parameter to it,
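For reference, a minimal SolrJ sketch of a sharded query (the shard list is hypothetical; shard entries use host:port/path with no http:// prefix):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ShardedSearch {
    public static void main(String[] args) throws Exception {
        // Any core can act as the aggregator for a distributed request.
        SolrServer server = new CommonsHttpSolrServer("http://etsx19.co.com:8080/solr/20090409_9");
        SolrQuery q = new SolrQuery("usa");
        // Shard entries are host:port/path, without the http:// prefix.
        q.set("shards", "etsx19.co.com:8080/solr/20090409_9,etsx19.co.com:8080/solr/20090409_8");
        System.out.println(server.query(q).getResults().getNumFound());
    }
}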
http://wiki.apache.org/solr/Solrj#head-2046bbaba3759b6efd0e33e93f5502038c01ac65
>
> I could index at the rate of 10,000 docs/sec using this and
> BinaryRequestWriter
>
> On Thu, Apr 9, 2009 at 10:36 PM, vivek sar wrote:
>> I'm inserting 10K in a batch (using addBeans method). I read some
app
needs to be running in the same JVM as the Solr webapp?
Thanks,
-vivek
2009/4/9 Noble Paul നോബിള് नोब्ळ् :
> how many documents are you inserting ?
> may be you can create multiple instances of CommonshttpSolrServer and
> upload in parallel
>
>
> On Thu, Apr 9, 2009 a
rches work fine)
>>as mentioned in this thread earlier.
>>
>>Thanks,
>>-vivek
>>
>>On Wed, Apr 8, 2009 at 1:57 AM, vivek sar wrote:
>>> Thanks Fergus. I'm still having problem with multicore search.
>>>
>>> I tried the following
ava:583)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:637)
Any tips on how I can search across multiple cores on the same Solr instance?
Thanks,
-vivek
On Thu, Apr 9, 2009 at 2:56 AM, Erik Hatcher wrote:
>
> On Apr 9, 2009, at 3:00 AM, vivek sar wrote:
>>
partition the indexes on size?
Thanks,
-vivek
On Wed, Apr 8, 2009 at 11:07 AM, vivek sar wrote:
> Any help on this issue? Would distributed search on multi-core on the same
> Solr instance even work? Does it have to be different Solr instances
> altogether (separate shards)?
>
>
e MultiThreadedHttpConnectionManager when creating
> the HttpClient instance?
>
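A minimal sketch of that suggestion (assuming commons-httpclient 3.x and SolrJ 1.4; the per-host connection limit is illustrative):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class SharedClientFactory {
    public static CommonsHttpSolrServer create(String url) throws Exception {
        // A thread-safe connection manager lets several indexing threads
        // share one HttpClient instance.
        MultiThreadedHttpConnectionManager mgr = new MultiThreadedHttpConnectionManager();
        mgr.getParams().setDefaultMaxConnectionsPerHost(16);
        return new CommonsHttpSolrServer(url, new HttpClient(mgr));
    }
}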
> On Wed, Apr 8, 2009 at 10:13 PM, vivek sar wrote:
>
>> single thread everything works fine. Two threads are fine too for a
>> while, and then all of a sudden the problem starts happening.
>>
>> I tr
rches work fine)
as mentioned in this thread earlier.
Thanks,
-vivek
On Wed, Apr 8, 2009 at 1:57 AM, vivek sar wrote:
> Thanks Fergus. I'm still having problem with multicore search.
>
> I tried the following with two cores (they both share the same schema
> and solrconfig.xml) o
the same problem when you use a single thread?
>
> what is the version of SolrJ that you use?
>
>
>
> On Wed, Apr 8, 2009 at 1:19 PM, vivek sar wrote:
>> Hi,
>>
>> Any ideas on this issue? I ran into this again - once it starts
>> happening it keeps happenin
8080/solr/core1,10.4.x.x:8085/solr/core1&indent=true&q=vivek+japan
>>
>>I get 404 error. Is this the right URL construction for my setup? How
>>else can I do this?
>>
>>Thanks,
>>-vivek
>>
>>On Fri, Apr 3, 2009 at 1:02 PM, vivek sar wrote:
>
at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:259)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:48)
at org.apache.solr.client.solrj.SolrServer.addBeans(SolrServer.java:57)
Thanks,
-vivek
On Sat, Apr 4, 2009 at 1:07 AM, vivek sar wrote:
> Hi,
>
>
solr/core1,localhost:8085/solr/core1,10.4.x.x:8080/solr/core0,10.4.x.x:8085/solr/core0,10.4.x.x:8080/solr/core1,10.4.x.x:8085/solr/core1&indent=true&q=vivek+japan
I get a 404 error. Is this the right URL construction for my setup? How
else can I do this?
Thanks,
-vivek
On Fri, Apr 3, 2009
Hi,
I'm sending 15K records at once using Solrj (server.addBeans(...))
and have two threads writing to the same index. One thread goes fine, but
the second thread always fails with,
org.apache.solr.client.solrj.SolrServerException:
org.apache.commons.httpclient.ProtocolException: Unbuffered entity
How often (on how many records) should I do this?
3) Should I do it programmatically or can I have it in the solrconfig.xml?
Thanks,
-vivek
On Fri, Apr 3, 2009 at 2:27 PM, vivek sar wrote:
> Just an update on this issue: Solr did come back after 80 min - so
> not sure where was it s
m also not
running any "optimize" command. What could cause Solr to hang for 80
min?
Thanks,
-vivek
On Fri, Apr 3, 2009 at 1:55 PM, vivek sar wrote:
> Hi,
>
> I'm using Solr 1.4 (nightly build - 03/29/09). I'm stress testing my
> application with Solr. My app uses So
Hi,
I'm using Solr 1.4 (nightly build - 03/29/09). I'm stress testing my
application with Solr. My app uses Solrj to write to a remote Solr (on the
same box, but in a different JVM). The stress test sends over 2 million
records (1 record = 500 bytes, with each record having 10 fields)
within 5 minutes. All
Hi,
I've a multi-core system (one core per day), so there would be around
30 cores in a month on a box running one Solr instance. We have two
boxes running the Solr instance and input data is fed to them in
round-robin fashion. Each box can have up to 30 cores in a month. Here
are questions,
wiki.apache.org/solr/Solrj#head-12c26b2d7806432c88b26cf66e236e9bd6e91849
>
> On Thu, Apr 2, 2009 at 4:21 AM, vivek sar wrote:
>> Hi,
>>
>> I'm using solrj (released v 1.3) to add my POJO objects
>> (server.addbeans(...)), but I'm getting this except
> On Thu, Apr 2, 2009 at 2:34 AM, vivek sar wrote:
>> Thanks Shalin.
>>
>> I added that in the solrconfig.xml, but now I get this exception,
>>
>> org.apache.solr.common.SolrException: Not Found
>> Not Found
>> request: http://localhost:8080/solr/core0/u
Hi,
I'm using solrj (released v1.3) to add my POJO objects
(server.addBeans(...)), but I'm getting this exception,
java.lang.ClassCastException: java.lang.Long
at org.apache.solr.common.util.NamedListCodec.unmarshal(NamedListCodec.java:89)
at org.apache.solr.client.solrj.impl
o
contains the conf and data directories. The solr.xml has the following in
it,
Am I missing anything else?
Thanks,
-vivek
On Wed, Apr 1, 2009 at 1:02 PM, Shalin Shekhar Mangar
wrote:
> On Thu, Apr 2, 2009 at 1:13 AM, vivek sar wrote:
>> Hi,
>>
>> I'm tryin
Hi,
I'm trying to add a list of POJO objects (using annotations) using
solrj, but the "server.addBeans(...)" call is throwing this exception,
org.apache.solr.common.SolrException: Bad Request
Bad Request
request: http://localhost:8080/solr/core0/update?wt=javabin&version=2.2
Note, I'm using mult
directory './solr/data/index' doesn't
exist. Creating new index...
I've also tried relative paths, but to no avail.
Is this a bug?
Thanks,
-vivek
On Wed, Apr 1, 2009 at 9:45 AM, vivek sar wrote:
> Thanks Shalin.
>
> Is it available in the latest nightly build?
>
Mangar
wrote:
> On Wed, Apr 1, 2009 at 1:48 PM, vivek sar wrote:
>> I'm using the latest released one - Solr 1.3. The wiki says passing
>> dataDir to the CREATE action (web service) should work, but that doesn't
>> seem to be working.
>>
>
> T
your approach below, but without the headache of managing
> multiple cores and index merging (not yet possible to do programmatically).
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: vivek sar
>
you can pass the dataDir as an extra parameter?
>
> On Wed, Apr 1, 2009 at 7:41 AM, vivek sar wrote:
>> Hi,
>>
>> I'm trying to set up cores dynamically. I want to use the same
>> schema.xml and solrconfig.xml for all the created cores, so plan to
>> pass
Hi,
I'm trying to set up cores dynamically. I want to use the same
schema.xml and solrconfig.xml for all the created cores, so I plan to
pass the same instance directory, but a different data directory. Here is
what I got in solr.xml by default (I didn't want to define any core here,
but looks like we h
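A minimal sketch of the CoreAdmin CREATE call with a per-core dataDir, matching the plan above (plain JDK; all names and paths are hypothetical, and this assumes a build where CREATE honors the dataDir parameter, as discussed elsewhere in this thread):

import java.net.HttpURLConnection;
import java.net.URL;

public class CreateCoreWithDataDir {
    public static void main(String[] args) throws Exception {
        // Shared instanceDir (same schema.xml/solrconfig.xml), per-core dataDir.
        String url = "http://localhost:8080/solr/admin/cores?action=CREATE"
                + "&name=core_20090402"
                + "&instanceDir=/opt/solr/common"
                + "&dataDir=/opt/solr/data/core_20090402";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        System.out.println("CREATE returned HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}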
Hi,
As part of speeding up the indexing process I'm thinking of spawning
multiple threads which will write to different temporary SolrCores.
Once the indexing process is done I want to merge all the indexes in the
temporary cores into a master core. For example, if I want one SolrCore per
day then every index c
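Since Solr offered no programmatic index merging at the time (as noted elsewhere in this thread), a raw-Lucene sketch of the merge step (assuming Lucene 2.4-era APIs; all paths are hypothetical, and Solr must not hold writers on these directories while it runs):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class MergeCores {
    public static void main(String[] args) throws Exception {
        // Open the master core's index for appending.
        IndexWriter master = new IndexWriter(
                FSDirectory.getDirectory(new File("/opt/solr/master/data/index")),
                new StandardAnalyzer(), false, IndexWriter.MaxFieldLength.UNLIMITED);
        // Merge the temporary cores' indexes into the master index.
        Directory[] temps = {
                FSDirectory.getDirectory(new File("/opt/solr/tmp1/data/index")),
                FSDirectory.getDirectory(new File("/opt/solr/tmp2/data/index"))
        };
        master.addIndexesNoOptimize(temps);
        master.close();
    }
}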
And the fact
>> that your heap is so small shows you are not really making use of that nice
>> ramBufferSizeMB setting. :)
>>
>> Also, use omitNorms="true" for fields that don't need norms (if their types
>> don't already do that).
>>
>
Thanks Otis. This is very useful. I'll try all your suggestions and
post my findings (and improvements).
Thanks,
-vivek
On Fri, Mar 27, 2009 at 7:08 PM, Otis Gospodnetic
wrote:
>
> Hi,
>
> Answers inlined.
>
>
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
> - Original Me
Hi,
I've an index of size 50G (around 100 million documents) and growing -
around 2000 records (1 rec = 500 bytes) are being written every second
continuously. If I make any search on this index I get OOM. I'm using
default cache settings (512,512,256) in solrconfig.xml. The search
is using the
Hi,
We have a distributed Solr system (2-3 boxes with each running 2
instances of Solr and each Solr instance can write to multiple cores).
Our use case is high index volume - we can get up to 100 million
records (1 record = 500 bytes) per day, but very low query traffic
(only administrators may
Thanks again Otis. A few more questions,
1) My app currently is a stand-alone Java app (not part of the Solr JVM)
that simply calls the update webservice on Solr (running in a separate web
container), passing 10k documents at once. In your example you
mentioned getting a list of Indexers and adding document t
lives
> outside of Solr. Of course, you could come up with a "Solr Proxy" component
> that abstracts some/all of this and pretends to be Solr.
>
>
> Otis --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>>
Hi,
I've used Lucene before, but I'm new to Solr. I've gone through the
mailing list, but was unable to find any clear idea on how to partition
Solr indexes. Here is what we want,
1) Be able to partition indexes by timestamp - basically partition
per day (create a new index directory every day)
2)