boost results within 250km

2014-04-08 Thread Aman Tandon
How can I give more boost to the results within 250km than to others, without using result filtering?
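One commonly suggested way to do this (a sketch only, not a tested recipe; the field name "latlon", the point and the query are illustrative) is to attach the 250 km circle as a boost query rather than a filter, e.g. with edismax:

    q=shoes&defType=edismax
    &sfield=latlon&pt=28.61,77.20&d=250
    &bq={!geofilt}

Documents inside the circle get the boost query's score added on top of their normal score; documents outside it are still returned, just ranked lower.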

create heat maps

2014-04-08 Thread Aman Tandon
How can we create heat maps using Solr spatial search? Thanks Aman Tandon

Re: OutOfMemoryError while merging large indexes

2014-04-08 Thread Haiying Wang
Thanks, Francois. Tried "-XX:-UseGCOverheadLimit" and I got a real OOM error now: "java.lang.OutOfMemoryError: Java heap space". Has anyone tried merging large indexes? What was your heap size setting for Solr? Regards, Haiying From: François Schiettecatt

Re: what is geodist default value

2014-04-08 Thread david.w.smi...@gmail.com
Huh. Well if you don't want the distance, don't put it in your "fl", whether it be in the request handler or the request. It may help to know that you can specify "fl" multiple times and the field list is ultimately the set of all of them. Given that, you could avoid putting distance:geodist(
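For example (field names are illustrative), the handler defaults can carry the long field list while the request adds the distance pseudo-field, since the fl parameters are merged:

    /select?q=*:*&sfield=latlon&pt=28.61,77.20
    &fl=id,name
    &fl=dist:geodist()
    &sort=geodist() asc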

Re: what is geodist default value

2014-04-08 Thread Aman Tandon
Sir, the problem is that we have too many fields in our *fl*, which I didn't mention in my previous mail, and we have many products in our organization that use our search. So in our back-end Java files we handle their requirements and provide results by customizing every product

Re: OutOfMemoryError while merging large indexes

2014-04-08 Thread François Schiettecatte
Have you tried using: -XX:-UseGCOverheadLimit François On Apr 8, 2014, at 6:06 PM, Haiying Wang wrote: > Hi, > > We were trying to merge a large index (9GB, 21 million docs) into current > index (only 13MB), using the mergeindexes command of CoreAdminHandler, but always > run into OOM e

Re: Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread KNitin
Thanks, Shawn. Adding it to all clients and servers worked On Tue, Apr 8, 2014 at 3:37 PM, KNitin wrote: > Thanks. I missed "the clients" part from doc. Will try and update the > results here > > > > > On Tue, Apr 8, 2014 at 3:27 PM, Shawn Heisey wrote: > >> On 4/8/2014 4:13 PM, KNitin wrote:

Re: Investigating performance issues in solr cloud

2014-04-08 Thread Shawn Heisey
On 4/8/2014 6:48 PM, Utkarsh Sengar wrote: > 1. I am using Oracle JVM > user@host:~$ java -version > java version "1.6.0_45" > Java(TM) SE Runtime Environment (build 1.6.0_45-b06) > Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode) That version should be very good, until you need to

Re: Investigating performance issues in solr cloud

2014-04-08 Thread Utkarsh Sengar
1. I am using Oracle JVM user@host:~$ java -version java version "1.6.0_45" Java(TM) SE Runtime Environment (build 1.6.0_45-b06) Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode) 2. I will try out jHiccup and your GC settings. 3. Yes, I am running ZK instances in an ensemble. I didn

Re: Investigating performance issues in solr cloud

2014-04-08 Thread Shawn Heisey
On 4/8/2014 6:00 PM, Utkarsh Sengar wrote: > Lots of questions indeed :) > > 1. Total virtual machines: 3 > 2. Replication factor: 0 (don't have any replicas yet) > 3. Each machine has 1 shard which has 20GB of data. So data for a > collection is spread across 3 machines totalling to 60GB > 4. Sta

Re: Investigating performance issues in solr cloud

2014-04-08 Thread Utkarsh Sengar
Lots of questions indeed :) 1. Total virtual machines: 3 2. Replication factor: 0 (don't have any replicas yet) 3. Each machine has 1 shard which has 20GB of data. So data for a collection is spread across 3 machines totalling to 60GB 4. Start solr: java -Xmx1m -javaagent:newrelic/newre

Re: Investigating performance issues in solr cloud

2014-04-08 Thread Shawn Heisey
On 4/8/2014 5:30 PM, Utkarsh Sengar wrote: > I see a sudden drop in throughput once every 3-4 days. The "downtime" is for > about 2-6 minutes and things stabilize after that. > > But I am not sure what is causing the problem. > > I have 3 shards with 20GB of data on each shard. > Solr dashboard:

Investigating performance issues in solr cloud

2014-04-08 Thread Utkarsh Sengar
I see a sudden drop in throughput once every 3-4 days. The "downtime" is for about 2-6 minutes and things stabilize after that. But I am not sure what is causing the problem. I have 3 shards with 20GB of data on each shard. Solr dashboard: http://i.imgur.com/6RWT2Dj.png Newrelic graphs when durin

Re: Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread KNitin
Thanks. I missed "the clients" part from doc. Will try and update the results here On Tue, Apr 8, 2014 at 3:27 PM, Shawn Heisey wrote: > On 4/8/2014 4:13 PM, KNitin wrote: > >> I have already raised the jute.buffersize to 5Mb on the zookeeper server >> side but still hitting the same problem.

Re: Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread Shawn Heisey
On 4/8/2014 4:13 PM, KNitin wrote: I have already raised the jute.buffersize to 5Mb on the zookeeper server side but still hitting the same problem. Should i make any changes on the solr server side for this (client side changes?) The jute.maxbuffer system property needs to be set on everything
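A sketch of what that looks like in practice (the 5 MB figure mirrors the value mentioned in the thread): the same system property goes on the ZooKeeper servers (e.g. via JVMFLAGS) and on every Solr and SolrJ JVM start command:

    -Djute.maxbuffer=5242880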

Re: Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread KNitin
Thanks, Shawn I have already raised the jute.buffersize to 5Mb on the zookeeper server side but still hitting the same problem. Should i make any changes on the solr server side for this (client side changes?) On Tue, Apr 8, 2014 at 9:09 AM, Shawn Heisey wrote: > On 4/8/2014 9:48 AM, KNitin wr

OutOfMemoryError while merging large indexes

2014-04-08 Thread Haiying Wang
Hi, We were trying to merge a large index (9GB, 21 million docs) into the current index (only 13MB), using the mergeindexes command of CoreAdminHandler, but always run into an OOM error. We currently set the max heap size to 4GB for the Solr server. We are using 4.6.0, and did not change the original solrc
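For reference, the CoreAdmin call being described looks roughly like this (core name and path are illustrative):

    http://localhost:8983/solr/admin/cores?action=mergeindexes&core=core0&indexDir=/path/to/large-index/data/index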

Re: Range query and join, parse exception when parens are added

2014-04-08 Thread Shawn Heisey
On 4/8/2014 1:48 PM, Mark Olsen wrote: Solr version 4.2.1 I'm having an issue using a "join" query with a range query, but only when the query is wrapped in parens. This query works: {!join from=member_profile_doc_id to=id}language_proficiency_id_number:[30 TO 50] However this query does no

Re: Stopping Solr instance

2014-04-08 Thread Shawn Heisey
On 4/8/2014 1:28 PM, abhishek jain wrote: What is the best way to stop Solr from the command line? The command with the stop port and secret key given in most online help links doesn't work for me all the time; I have to kill it most times! I have, though, noted excessive swap usage when I have to kill

Delete by query with soft commit

2014-04-08 Thread youknow...@heroicefforts.net
It appears that UpdateResponse.setCommitWithin is not honored when executing a delete query against SolrCloud (SolrJ 4.6). However, setting the hard commit parameter functions as expected. Is this a known bug? Thanks, -Jess

Re: Stopping Solr instance

2014-04-08 Thread Ahmet Arslan
Hi, How do you start Solr? On Tuesday, April 8, 2014 10:31 PM, abhishek jain wrote: Hi friends, What is the best way to stop Solr from the command line? The command with the stop port and secret key given in most online help links doesn't work for me all the time; I have to kill it most times!

Re: waitForLeaderToSeeDownState when leader is down

2014-04-08 Thread Jessica Mallet
To clarify, when I said "leader" and "follower" I meant the old leader and follower before the zookeeper session expiration. When they're recovering there's no leader. On Tue, Apr 8, 2014 at 1:49 PM, Jessica Mallet wrote: > I'm playing with dropping the cluster's connections to zookeeper and th

waitForLeaderToSeeDownState when leader is down

2014-04-08 Thread Jessica Mallet
I'm playing with dropping the cluster's connections to zookeeper and then reconnecting them, and during recovery, I always see this on the leader's logs: ElectionContext.java (line 361) Waiting until we see more replicas up for shard shard1: total=2 found=1 timeoutin=139902 and then on the follow

RE: How are you handling "killer queries" with solr?

2014-04-08 Thread Toke Eskildsen
Shawn Heisey [s...@elyograg.org] wrote: > Are you using the Jetty that comes with Solr, or are you using Jetty > from another source? If you are using Jetty from another source, the > maxThreads parameter may not be high enough. I believe the default in a > typical Jetty config is 200, but the jet
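The relevant section of the jetty.xml shipped with Solr 4.x is the thread pool; a sketch (the value is illustrative):

    <Set name="ThreadPool">
      <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
        <Set name="maxThreads">10000</Set>
      </New>
    </Set>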

Re: How to reduce the search speed of solrcloud

2014-04-08 Thread Sathya
Hi All, I found that which is taking more time. It is *server.query* SolrDataDAO dataDao = new SolrDataDAO(); QueryResponse resp = dataDao.queryData(0, 1, subject); SolrDocumentList data = resp.getResults(); System.out.println("len " + data.size()); System.out.println(); Subject is passed to he
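For comparison, a plain SolrJ 4.x version of that call (a sketch; the URL, query string and row count are illustrative) is:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class QueryTiming {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
            SolrQuery query = new SolrQuery("some subject text"); // the search string being timed
            query.setStart(0);
            query.setRows(1);
            long t0 = System.currentTimeMillis();
            QueryResponse resp = server.query(query);             // the call whose latency is in question
            System.out.println("len " + resp.getResults().size()
                    + " took " + (System.currentTimeMillis() - t0) + " ms"
                    + " (QTime=" + resp.getQTime() + ")");
        }
    }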

Re: How are you handling "killer queries" with solr?

2014-04-08 Thread Sohan Kalsariya
I am using the Jetty that comes with Solr, and I am not using any third-party plugins or patches. BTW, what kind of error is this? Is it related to a memory issue? Please help me understand. On Tue, Apr 8, 2014 at 9:25 PM, Shawn Heisey wrote: > On 4/8/2014 3:17 AM, Sohan Kalsariya wrot

Re: Searching multivalue fields.

2014-04-08 Thread Vijay Kokatnur
Since Span is the only way to solve the problem, I won't mind re-indexing. It's just that I have never done it before. We've got 80G of indexed data replicated on two nodes in a cluster. Is there a preferred way to go about re-indexing? On Tue, Apr 8, 2014 at 12:17 AM, Ahmet Arslan wrote: >

Re: solr4 performance question

2014-04-08 Thread Erick Erickson
bq: solr.autoCommit.maxTime:60 10 true Every 100K documents or 10 minutes (whichever comes first) your current searchers will be closed and a new searcher opened, all the warmup queries etc. might happen. I suspect you're not doing much with autowarming and/or newSearcher qu

Range query and join, parse exception when parens are added

2014-04-08 Thread Mark Olsen
Solr version 4.2.1 I'm having an issue using a "join" query with a range query, but only when the query is wrapped in parens. This query works: {!join from=member_profile_doc_id to=id}language_proficiency_id_number:[30 TO 50] However this query does not (just wrapping with parens): ({!join f
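A workaround that is often suggested for this parse issue (a sketch; untested against 4.2.1) is to keep the join's nested query inside a v= local parameter, or to use the _query_ hook, so the surrounding parentheses are handled by the default parser:

    ({!join from=member_profile_doc_id to=id v='language_proficiency_id_number:[30 TO 50]'})

    (_query_:"{!join from=member_profile_doc_id to=id}language_proficiency_id_number:[30 TO 50]")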

Stopping Solr instance

2014-04-08 Thread abhishek jain
Hi friends, What is the best way to stop Solr from the command line? The command with the stop port and secret key given in most online help links doesn't work for me all the time; I have to kill it most times! I have, though, noted excessive swap usage when I have to kill it. Is there a link between sw
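For the Jetty example that ships with Solr 4.x, the stop call has to use the same stop port and key the instance was started with (values are illustrative):

    # start
    java -DSTOP.PORT=7983 -DSTOP.KEY=secret -jar start.jar
    # stop
    java -DSTOP.PORT=7983 -DSTOP.KEY=secret -jar start.jar --stop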

RE: solr4 performance question

2014-04-08 Thread Joshi, Shital
We don't do any soft commit. This is our hard commit setting: <autoCommit> <maxTime>${solr.autoCommit.maxTime:600000}</maxTime> <maxDocs>100000</maxDocs> <openSearcher>true</openSearcher> </autoCommit> We use this update command: solr_command=$(cat
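For comparison, the commonly recommended shape for these settings (a sketch, not the poster's actual configuration; the soft-commit interval is illustrative) keeps the hard commit invisible and opens searchers only via a soft commit:

    <autoCommit>
      <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
      <maxDocs>100000</maxDocs>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>60000</maxTime>
    </autoSoftCommit>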

Re: solr4 performance question

2014-04-08 Thread Erick Erickson
What do you have for your _softcommit_ settings in solrconfig.xml? I'm guessing you're using SolrJ or similar, but the solrconfig settings will trip a commit as well. For that matter, what are all your commit settings in solrconfig.xml, both hard and soft? Best, Erick On Tue, Apr 8, 2014 at 10:28

Re: solr4 performance question

2014-04-08 Thread Furkan KAMACI
Hi Joshi; Click to the Plugins/Stats section under your collection at Solr Admin UI. You will see the cache statistics for different types of caches. hitratio and evictions are good statistics to look at first. On the other hand you should read here: https://wiki.apache.org/solr/SolrPerformanceFac

solr4 performance question

2014-04-08 Thread Joshi, Shital
Hi, We have a 10 node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM on a 60GB machine and 40 GB of index. We're constantly noticing that Solr queries take longer while an update (with the commit=false setting) is in progress. A query which usually takes .5 seconds takes up to 2 minutes while up

Re: How to reduce the search speed of solrcloud

2014-04-08 Thread Sathya
Hi Anshum, I am using Solr 4.7, and I followed this tutorial to set up a SolrCloud. I have only one collection in my Solr. Kindly let me know if you need more details. On Fri, Apr 4, 2014 at 11:19 PM, Anshum Gupta [via Lucene] < ml-no

solr-user@lucene.apache.org

2014-04-08 Thread T. Kuro Kurosaka
I don't think & is special to the parser. Classic examples like "AT&T" just work, as far as the query parser is concerned. https://wiki.apache.org/solr/SolrQuerySyntax even tells you that you can escape the special meaning with a backslash. "&" is special in the URL, however, and that has to be hex-esc
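For example, a raw '&' in the query string has to be percent-encoded so it is not read as a parameter separator:

    q=AT%26T        (reaches the query parser as AT&T)

The same applies to the '&' inside a category name like "E & F" when it is sent as part of the URL.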

solr-user@lucene.apache.org

2014-04-08 Thread Peter Kirk
Thanks for the comments, and for the idea for the term query parser. This seems to work well (except I still can't get '&' in a category name to work - I can get the (one and only) customer to change the category names). I'll look into fixing the indexing side of things - could be an idea to stri

Re: Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread Shawn Heisey
On 4/8/2014 9:48 AM, KNitin wrote: I am running solr cloud 4.3.1 (there is a plan to upgrade to later versions but that would take a few months). I noticed a very peculiar solr behavior in solr that beyond *2496* cores I am unable to create any more collections due to this error *Could not get

Re: Duplicate Unique Key

2014-04-08 Thread Simon
MergingIndex is not the case here as I am not doing that. Even though the issue is gone for now, it is not a relief for me, as I am not sure how to explain this to others (peer, boss and user). I am thinking of implementing a watchdog to check whether the total number of Solr documents exceeds the number of items i

Re: How are you handling "killer queries" with solr?

2014-04-08 Thread Shawn Heisey
On 4/8/2014 3:17 AM, Sohan Kalsariya wrote: I am using apache solr-4.6.1 and solr works fine when the number of requests are less *But when the number of concurrent requests are more Solr is not able to handle it and it gives the following errors on server.* 834246 [qtp1797259051-168] WARN or

Cannot get shard id error - Hitting limits on creating collections

2014-04-08 Thread KNitin
Hi I am running solr cloud 4.3.1 (there is a plan to upgrade to later versions but that would take a few months). I noticed a very peculiar solr behavior in solr that beyond *2496* cores I am unable to create any more collections due to this error *Could not get shard id for core.* I also n

Re: Ranking code

2014-04-08 Thread Shawn Heisey
On 4/8/2014 3:55 AM, azhar2007 wrote: I'm basically trying to understand how results are ranked. What's the algorithm behind it? If you add a debugQuery parameter to your request, set to true, you will see the score calculation for every document included in the response. This is the default s
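For example:

    /select?q=modellering&debugQuery=true

adds a "debug" section to the response containing the full score explanation (term frequency, idf, field norm, boosts) for every document returned.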

Re: Error handling in Solr.

2014-04-08 Thread Alexandre Rafalovitch
Which version of Solr? Could be all those admin extra files. On 08/04/2014 4:36 pm, "abhishek jain" wrote: > hi friends, > While browsing through the logs of solr,i noticed a few null pointer > exceptions, i am concerned what could be the reason? > > > ERROR org.apache.solr.core.SolrCore –

Solr ExtractingRequestHandler XPath

2014-04-08 Thread Lucas .
Hi, I'm trying to use ExtractingRequestHandler with the XPath parameter but this doesn't work for me -> http://wiki.apache.org/solr/ExtractingRequestHandler#XPath with this &xpath=/xhtml:html/xhtml:body/descendant:node() it seems to work, but when I try with something like this /xhtml:html/xhtml:b

solr-user@lucene.apache.org

2014-04-08 Thread Erick Erickson
I'd seriously consider filtering these characters out when you index and search; this is quite likely very brittle. The same item, say from two different vendors, might have D (E & F) or D E & F. If you just stripped all of the non-alphanumeric characters you'd likely get less brittle results. You kn

Re: Error handling in Solr.

2014-04-08 Thread Erick Erickson
It looks like someone is asking to read a file from your conf directory and the file isn't there. What is the URL associated with this error? This is probably not something to be concerned about since it isn't related to Solr running, no more disturbing than throwing a syntax error. That said, I'd

Re: Duplicate Unique Key

2014-04-08 Thread Erick Erickson
Right, this is expected behavior. The real problem isn't data loss, but how do you know which doc should "win"? Merging indexes is for a rather narrowly-defined use-case, it was never intended to remove duplicates. Best, Erick On Tue, Apr 8, 2014 at 12:36 AM, Cihad Guzel wrote: > Hi. > > I have

Re: Commit Within and /update/extract handler

2014-04-08 Thread Erick Erickson
Got a clue how it's being generated? Because it's not going to show you documents. commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} openSearcher=false and softCommit=false so the documents will be invisible. You need one or the

solr-user@lucene.apache.org

2014-04-08 Thread Ahmet Arslan
Hi Peter, TermQueryParser is useful in your case.  q={!term f=categories_string}A|B|D (E & F) On Tuesday, April 8, 2014 4:37 PM, Peter Kirk wrote: Hi How to search for Solr special characters like '(' and '&'? I am trying to execute searches for "products" in my Solr (3.6.1) index, based on

solr-user@lucene.apache.org

2014-04-08 Thread Peter Kirk
Hi How to search for Solr special characters like '(' and '&'? I am trying to execute searches for "products" in my Solr (3.6.1) index, based on the "categories" to which these products belong. The categories are stored in a multistring field for the products, and are hierarchical, and are fed

Re: what is geodist default value

2014-04-08 Thread david.w.smi...@gmail.com
You're computing the distance from the locations you've put in your index to 0,0. Why 0,0? Wouldn't you want to provide a point at query time? On Tue, Apr 8, 2014 at 7:41 AM, Aman Tandon wrote: > In this case of *query 2* as mentioned in previous mail, there will be > distance calculation usi

Re: Strange relevance scoring

2014-04-08 Thread Aman Tandon
Yes David, you must use omitNorms="true" for better performance. Thanks Aman Tandon On Tue, Apr 8, 2014 at 5:36 PM, Ahmet Arslan wrote: > Hi David, > > omitNorms="true" will cause additional performance gains too. > https://wiki.apache.org/solr/SolrPerformanceFactors#indexed_fields > > To glo

Re: Strange relevance scoring

2014-04-08 Thread Ahmet Arslan
Hi David, omitNorms="true" will cause additional performance gains too.  https://wiki.apache.org/solr/SolrPerformanceFactors#indexed_fields To globally disable length norm, one can create a custom similarity and register it as a default similarity though.  On Tuesday, April 8, 2014 2:59 PM, D

Re: Strange relevance scoring

2014-04-08 Thread David Santamauro
Is there any general setting that removes this "punishment" or must omitNorms=false be part of every field definition? On 4/8/2014 7:04 AM, Ahmet Arslan wrote: Hi, length normal is computed for every document at index time. I think it is 1/sqrt(number of terms). Please see section 6. norm(
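As Ahmet notes, the only global route is a custom Similarity; otherwise omitNorms is declared per field (or on the fieldType) in schema.xml, e.g. (names are illustrative):

    <field name="description" type="text_general" indexed="true" stored="true" omitNorms="true"/>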

Re: what is geodist default value

2014-04-08 Thread Aman Tandon
In the case of *query 2* as mentioned in the previous mail, there will be a distance calculation using *distance:geodist(0,0,**latlon)*, as it will take the default lat and lon values, so how can it return a varying distance? With Regards Aman Tandon On Tue, Apr 8, 2014 at 5:06 PM, Aman Tandon wr

Re: what is geodist default value

2014-04-08 Thread Aman Tandon
Hi Sir, *Scenario*: I have to return the distances for every city search so I make these configurations as described below. *solrconfig.xml: * My request handler is "im.search" and its defaults are none json 20

Re: Strange relevance scoring

2014-04-08 Thread Ahmet Arslan
Hi, the length norm is computed for every document at index time. I think it is 1/sqrt(number of terms). Please see section 6. norm(t,d) at https://lucene.apache.org/core/4_7_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html If you don't care about length normalisation, you can s
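A quick worked example of that factor: a field containing 4 terms gets a norm of roughly 1/sqrt(4) = 0.5, while one containing 100 terms gets roughly 1/sqrt(100) = 0.1, so the same term match scores about five times higher in the short field (before the lossy one-byte encoding of norms is taken into account).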

Re: Strange relevance scoring

2014-04-08 Thread John Nielsen
Hi, I couldn't find any occurrence of SpanFirstQuery in either the schema.xml or solrconfig.xml files. This is the query i used with debug=results. http://pastebin.com/bWzUkjKz And here is the answer. http://pastebin.com/nCXFcuky I am not sure what I am supposed to be looking for. On Tue, A

Re: Strange relevance scoring

2014-04-08 Thread John Nielsen
Interesting. Most of the text fields are single word fields or close to it, but on some of the documents, long text appears. How long does a text need to be before hitting length normalization? On Tue, Apr 8, 2014 at 11:36 AM, Ahmet Arslan wrote: > Hi Nielsen, > > There is no special attentio

Re: Regex For *|* at hl.regex.pattern

2014-04-08 Thread Furkan KAMACI
Hi Jack; My sentence delimiter is not one character; it is *|*. How do I write a regex for it? 2014-04-08 8:06 GMT+03:00 Jack Krupansky : > The regex pattern should match the text of the fragment. IOW, exclude > whatever delimiters are not allowed in the fragment. > > The default is: > > [-\w ,\n
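One way to express "a fragment that does not contain the literal *|* delimiter" (a sketch only; the pattern is a Java regex and must be URL-encoded when sent as a request parameter) is a per-character negative lookahead:

    hl.regex.pattern=(?:(?!\*\|\*)[\s\S]){20,200}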

Re: Ranking code

2014-04-08 Thread azhar2007
I'm basically trying to understand how results are ranked. What's the algorithm behind it? --- Original Message --- From: "Shawn Heisey-4 [via Lucene]" Sent: 7 April 2014 19:24 To: "azhar2007" Subject: Re: Ranking code On 4/7/2014 10:29 AM, azhar2007 wrote: > Hi does anybody know where the ran

anyone besides Solr also using Elasticsearch?

2014-04-08 Thread Bernd Fehling
Hi list, as the title says, is anyone besides Solr also using Elasticsearch? If so, are you: - using JSON for ES search queries? - using the sparse URI search of ES for search queries? - having your own addon/plugin for turning Solr URI queries into JSON quries for ES? - having any other combina

Re: Strange relevance scoring

2014-04-08 Thread Ahmet Arslan
Hi Nielsen, There is no special attention paid to the first word. You are probably hitting length normalisation. Lucene/Solr punishes long documents and favours short documents. Is the document where the word appears 5 times longer? On Tuesday, April 8, 2014 12:03 PM, John Nielsen wrote: Hi, We are seeing a strange phe

Error handling in Solr.

2014-04-08 Thread abhishek jain
hi friends, While browsing through the logs of Solr, I noticed a few null pointer exceptions; I am concerned what could be the reason? ERROR org.apache.solr.core.SolrCore – java.lang.NullPointerException at org.apache.solr.handler.admin.ShowFileRequestHandler.showFromFileSystem(Sh

RE: Strange relevance scoring

2014-04-08 Thread Markus Jelsma
Hi - the thing you describe is possible when your set up uses SpanFirstQuery. But to be sure what's going on you should post the debug output. -Original message- > From:John Nielsen > Sent: Tuesday 8th April 2014 11:03 > To: solr-user@lucene.apache.org > Subject: Strange relevance scor

Re: AW: AW: auto completion search with solr using NGrams in SOLR

2014-04-08 Thread atpatil11
Hi, I have made the same changes as you described and changed the code to use my field names. However I'm getting the following error. I even reverted the edited code but it still throws the same error. We're on Solr 4.6. When I restart Solr it says solr (pid 4610) already running. SolrCore Initialization Fa

How are you handling "killer queries" with solr?

2014-04-08 Thread Sohan Kalsariya
I am using apache solr-4.6.1 and Solr works fine when the number of requests is low. *But when the number of concurrent requests is higher, Solr is not able to handle it and gives the following errors on the server.* 834246 [qtp1797259051-168] WARN org.eclipse.jetty.servlet.ServletHandler - /solr

Strange relevance scoring

2014-04-08 Thread John Nielsen
Hi, We are seeing a strange phenomenon with our Solr setup which I have been unable to answer. My Google-fu is clearly not up to the task, so I am trying here. It appears that if i do a freetext search for a single word, say "modellering" on a text field, the scoring is massively boosted if the

MapReduceIndexerTool does not respect Lucene version in solrconfig Was: converting 4.7 index to 4.3.1

2014-04-08 Thread Dmitry Kan
Hello, When we instantiate the MapReduceIndexerTool with the collections' conf directory, we expect, that the Lucene version is respected and the index gets generated in a format compatible with the defined version. This does not seem to happen, however. Checking with luke: the expected Lucene
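The solrconfig.xml setting the subject refers to is luceneMatchVersion, declared like this (the value shown targets the 4.3 format the thread wants to stay compatible with; whether MapReduceIndexerTool honours it is exactly what is being reported here):

    <luceneMatchVersion>4.3</luceneMatchVersion>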

SOLR Remote Search - Best Practice

2014-04-08 Thread Bernhard Prange
Hi all, I have an issue with a JSON-related query: an Ajax search form that should look up the JSON query from another location. The domain the data should be pulled from: http://mydomain.com/projects/solr_server/?q=searchword That works fine if I type it in the browser. Now I
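One way this kind of cross-domain lookup is usually handled with Solr's JSON response writer (a sketch; the callback name is illustrative) is JSONP via the json.wrf parameter:

    http://mydomain.com/projects/solr_server/?q=searchword&wt=json&json.wrf=myCallback

The response then arrives as myCallback({...}), which the Ajax search form can consume from another origin.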

Re: Duplicate Unique Key

2014-04-08 Thread Cihad Guzel
Hi. I have encountered a similar situation when I tested solr merge index . ( http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201403.mbox/%3CCAMrn6cOVWohxooRzZ8NmwYQUda2GW+gYD+edvC_b_kGT=f4...@mail.gmail.com%3E ) I have had duplicates. But the duplicates are gone when I post same data

Re: Searching multivalue fields.

2014-04-08 Thread Ahmet Arslan
Hi, Changing value of omitTermFreqAndPositions requires re-indexing, unfortunately. And I remembered that you don't want to reindex. It looks like we are out of options. Ahmet On Tuesday, April 8, 2014 12:45 AM, Vijay Kokatnur wrote: Yes I did restart solr, but did not re-index.  Is that