Jens,

Yes, we are doing text search.
My question to all is: is the approach of creating a core for each user a good idea?

AJ

On Wed, May 23, 2012 at 2:37 PM, Jens Grivolla <j+...@grivolla.net> wrote:
> So are you even doing text search in Solr at all, or just using it as a
> key-value store?
>
> If the latter, do you have your schema configured so that only the
> search_id field is indexed (with a keyword tokenizer) and everything
> else only stored? Also, are you sure that Solr is the best option as a
> key-value store?
>
> Jens
>
> On 05/23/2012 04:34 AM, Amit Jha wrote:
>> Hi,
>>
>> Thanks for your advice. It is basically a meta-search application.
>> Users can perform a search on N data sources at a time. We broadcast a
>> parallel search to each selected data source and write the data to
>> Solr using a custom-built API (the API and Solr are deployed on
>> separate machines; the API's job is to perform the parallel search and
>> write the data to Solr). The API notifies the application that some
>> results are available, and the application then fires a search query
>> to display them (the query would be q=unique_search_id). Meanwhile,
>> the API keeps writing data to Solr, and the user can search Solr again
>> to view all results.
>>
>> In the current scenario we are using a single Solr server and
>> performing real-time indexing and search. Performing these operations
>> on a single Solr instance slows the process down as the index size
>> increases.
>>
>> So we are planning to use multi-core Solr, where each user will have
>> their own core. All cores will share the same schema.
>>
>> Please suggest if this approach has any issues.
>>
>> Rgds AJ
>>
>> On 22-May-2012, at 20:14, Sohail Aboobaker <sabooba...@gmail.com> wrote:
>>
>>> It would help if you provide your use case. What are you indexing for
>>> each user, and why would you need a separate core for each user? How
>>> do you decide the schema for each user? It might be better to
>>> describe your use case and desired results. People on the list will
>>> be able to advise on the best approach.
>>>
>>> Sohail
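
For reference, the key-value setup Jens describes might look roughly like the following schema.xml fragment. This is only a sketch: of the field names, only search_id comes from this thread; `title` and `content` are hypothetical placeholders for the stored-only result fields.

```xml
<!-- Hypothetical schema.xml fragment (field names other than search_id
     are placeholders). solr.StrField indexes the whole value as a single
     token, which gives keyword-tokenizer-like exact matching. -->
<fieldType name="string_exact" class="solr.StrField" sortMissingLast="true"/>

<!-- Only search_id is indexed (searchable); everything else is stored
     for retrieval but not indexed. -->
<field name="search_id" type="string_exact" indexed="true"  stored="true"/>
<field name="title"     type="string_exact" indexed="false" stored="true"/>
<field name="content"   type="string_exact" indexed="false" stored="true"/>
```

With a setup like this, a query such as q=search_id:&lt;unique_search_id&gt; would do an exact lookup and return the stored fields, which is the key-value usage pattern being discussed.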