How about doing the indexing on a completely different node and then
swapping the index into production using Solr collection aliases?
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection

The problem here is that deleting existing content is harder, so this
approach is more suitable for things like rolling log collections that
are one-way only.
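
For example, once the new index has been built on the other node, repointing
the production alias at it is a single Collections API call. Here is a rough
Python sketch; the host, alias and collection names are only placeholders:

    import requests

    # Placeholder host, alias and collection names; adjust to your setup.
    SOLR_ADMIN = "http://solr-host:8983/solr/admin/collections"

    # Suppose the bulk indexing just finished into a collection called
    # "products_v2". Re-issuing CREATEALIAS with an existing alias name
    # atomically repoints it, so searchers querying the "products" alias
    # switch to the new index with no downtime.
    resp = requests.get(SOLR_ADMIN, params={
        "action": "CREATEALIAS",
        "name": "products",            # alias the search side queries
        "collections": "products_v2",  # collection that just finished indexing
    })
    resp.raise_for_status()
    print(resp.text)

The old collection can then be dropped later (Collections API action=DELETE)
once nothing points at it any more.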

Regards,
   Alex.
----
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 24 September 2015 at 18:05, Siddhartha Singh Sandhu
<sandhus...@gmail.com> wrote:
> Thank you so much.
>
> Safe to ignore the following (not a query):
>
> Never did this, but how about this crazy idea:
>
> Take an Amazon EFS volume and share it between two EC2 instances. Use one
> EC2 endpoint to update the index on EFS while the other reads from it. This
> way each EC2 instance can use its own compute and not share its resources
> amongst Solr threads.
>
> Regards,
> Sid.
>
> On Thu, Sep 24, 2015 at 5:17 PM, Shawn Heisey <apa...@elyograg.org> wrote:
>
>> On 9/24/2015 2:01 PM, Siddhartha Singh Sandhu wrote:
>> > I wanted to know if we can configure different ports as endpoints for the
>> > upload and search APIs, and if someone could point me in the right
>> > direction.
>>
>> From our perspective, no.
>>
>> I have no idea whether it is possible at all ... it might be something
>> that a servlet container expert could figure out, or it might require
>> code changes to Solr itself.
>>
>> You probably need another mailing list specifically for the container.
>> For virtually all 5.x installs, the container is Jetty.  In earlier
>> versions, it could be any container.
>>
>> Another possibility would be putting an intelligent proxy in front of
>> Solr and having it accept only certain handler paths on certain ports,
>> then forward them to the common port on the Solr server [a rough sketch
>> of this idea follows below the quoted message].
>>
>> If you did manage to do this, it would require custom client code.  None
>> of the Solr client libraries has a facility for using separate ports for
>> updates and queries.
>>
>> Thanks,
>> Shawn
>>
>>
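
To illustrate the proxy approach Shawn describes above: here is a very rough,
untested Python sketch of a forwarding proxy that listens on two extra ports,
lets only query handlers through on one and only update handlers through on
the other, and passes everything it accepts to the single real Solr port. The
ports, backend address and path whitelist are made-up examples, not anything
Solr ships with.

    import threading
    import requests
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    SOLR_BACKEND = "http://localhost:8983"   # the one port Solr really listens on

    def make_handler(allowed_suffixes):
        """Build a handler class that forwards only whitelisted handler paths."""
        class Handler(BaseHTTPRequestHandler):
            def _forward(self, body=None):
                path = self.path.split("?", 1)[0]
                if not any(path.endswith(s) for s in allowed_suffixes):
                    self.send_error(403, "handler not allowed on this port")
                    return
                headers = {}
                if self.headers.get("Content-Type"):
                    headers["Content-Type"] = self.headers["Content-Type"]
                resp = requests.request(self.command, SOLR_BACKEND + self.path,
                                        data=body, headers=headers)
                self.send_response(resp.status_code)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "text/plain"))
                self.end_headers()
                self.wfile.write(resp.content)

            def do_GET(self):
                self._forward()

            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                self._forward(self.rfile.read(length) if length else None)
        return Handler

    def serve(port, suffixes):
        ThreadingHTTPServer(("", port), make_handler(suffixes)).serve_forever()

    # Port 8984 only accepts query handlers, port 8985 only accepts updates;
    # both forward to the real Solr port 8983.
    threading.Thread(target=serve, args=(8984, ["/select"]), daemon=True).start()
    serve(8985, ["/update"])

With something like this in front of Solr, the "custom client code" part stays
small: the indexing side simply builds its base URL against port 8985 and the
search side against port 8984, and beyond that base URL nothing in a typical
Solr client needs to change.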
