/firstSearcher warming queries not doing an
adequate job of warming your caches, which can affect performance.
Perhaps the answer is to allocate more cache space and hence
more JVM heap space.
I hope that this helps some.
- Amit
On Thu, Oct 7, 2010 at 4:32 AM, Christos Constantinou wrote:
morelikethis, is the only
way to actually modify the MLT handler to do the de-dup? Would it make
sense to refactor the Collapse component to make it more reusable
across other components, even if I have to modify the MLT component to
use it?
Any thoughts on this would be helpful.
Thanks
Amit
ving it re-parse the configuration files).
Any help would be appreciated.
Thanks!
Amit
On Thu, Oct 7, 2010 at 10:07 AM, Amit Nithian wrote:
> I am trying to understand the multicore setup of Solr more and saw
> that SolrCore.getCore is deprecated in favor of
> CoreContainer.getCore(nam
I implemented the edge ngrams solution and it's better than any other
I could think of, because I can index more than just text (other
metadata) that can be used to *rank* the autocomplete results,
eventually getting to ranking by the probability of
selection which is, after all, wha
If you're going to validate the rows parameter, you may as well validate the
start parameter too. I've run into problems where start and rows with
ridiculously high values crash our servers.
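A minimal sketch of that kind of clamping on the client side, assuming
SolrJ; the caps and names here are illustrative only:

import org.apache.solr.client.solrj.SolrQuery;

public class SafePaging {
    private static final int MAX_ROWS = 100;    // example cap
    private static final int MAX_START = 10000; // example cap

    // Clamp user-supplied paging values before they ever reach Solr.
    public static SolrQuery build(String userQuery, int start, int rows) {
        SolrQuery q = new SolrQuery(userQuery);
        q.setStart(Math.min(Math.max(start, 0), MAX_START));
        q.setRows(Math.min(Math.max(rows, 0), MAX_ROWS));
        return q;
    }
}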
On Thu, Nov 22, 2012 at 9:58 AM, solr-user wrote:
> Thanks guys. This is a problem with the front end not v
You can simplify your code by searching across cores in the SearchComponent
(sketched below):
1) public class YourComponent implements SolrCoreAware
--> Grab an instance of CoreContainer and store it (mCoreContainer =
core.getCoreDescriptor().getCoreContainer();)
2) In the process method:
* grab the core requested (SolrC
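A minimal sketch of those two steps, assuming Solr 4.x APIs; the component
and core names are made up:

import java.io.IOException;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.util.plugin.SolrCoreAware;

public class YourComponent extends SearchComponent implements SolrCoreAware {
    private CoreContainer coreContainer;

    @Override
    public void inform(SolrCore core) {
        // Step 1: grab the CoreContainer once and hold on to it.
        coreContainer = core.getCoreDescriptor().getCoreContainer();
    }

    @Override
    public void prepare(ResponseBuilder rb) throws IOException {}

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // Step 2: look up the core you want ("otherCore" is hypothetical).
        SolrCore other = coreContainer.getCore("otherCore"); // bumps the ref count
        try {
            // ... run your query against other.getSearcher() here ...
        } finally {
            other.close(); // always release the reference
        }
    }

    @Override
    public String getDescription() { return "cross-core search sketch"; }

    @Override
    public String getSource() { return null; }
}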
Why not create a new field that just contains the day component? Then you
can group by this field.
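For instance, assuming a hypothetical string field "datetime_day" that is
populated at index time with just the yyyy-MM-dd part of "datetime", the
grouped query via SolrJ might look like:

import org.apache.solr.client.solrj.SolrQuery;

public class GroupByDay {
    public static SolrQuery query() {
        SolrQuery q = new SolrQuery("*:*");
        q.set("group", "true");
        q.set("group.field", "datetime_day"); // hypothetical day-only field
        return q;
    }
}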
On Thu, Nov 29, 2012 at 12:38 PM, sdanzig wrote:
> I'm trying to create a SOLR query that groups/field collapses by date. I
> have a field in yyyy-MM-dd'T'HH:mm:ss'Z' format, "datetime", and I'm
ue&group.func=rint(div(ms(date_dt),mul(24,
> mul(60,mul(60,1000)
>
> -- Jack Krupansky
>
> -Original Message- From: Amit Nithian
> Sent: Thursday, November 29, 2012 10:29 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Grouping by a date field
>
Thanks Sandeep,
How can it be done when using a database, since the database has all the
records: old, new, and updated?
On Wed, Dec 5, 2012 at 11:47 PM, Sandeep Mestry wrote:
> Hi Amit/Shanu,
>
> You can create the solr document for only the updated record and index it
> to ensure only
Hi,
How can I do this in Solr 4?
Amit
On Thu, Dec 6, 2012 at 1:40 PM, Markus Jelsma wrote:
> custom similarity for that field that returns 1 for
I did the same thing in Solr 3.6 and it works, but in Solr 3.6 field-level
similarity is not available, and Solr 4 has similarity factories. So I was
not sure how to do it in Solr 4. Which class do I need to extend to
move ahead?
On Wed, Jan 16, 2013 at 4:44 PM, Upayavira wrote:
> For someone ver
It's all about the data set, by which I mean the index. If you have documents
containing "toy" and "doll", it will return them in the result set.
What I understood is that you are talking about the context of the query. For
example, if you search "books on MK Gandhi" and "books by MK Gandhi", both
queries h
A boost query (bq) and a boost function (bf) will serve your purpose.
Rgds
AJ
On 16-Jan-2013, at 17:20, Dariusz Borowski wrote:
> Hi,
>
> Is it possible to define priorities on fields?
>
> Let's say I have a product table which has the following fields:
>
> - id
> - title
> - description
> - code_name
>
for this and
wanted to get feedback to see if this is an issue that others have
encountered and, if so, whether this would help.
Thanks
Amit
Please correct my understanding:
use one of the factories as the global similarity,
then extend org.apache.lucene.search.similarities.DefaultSimilarity to create
the custom similarity,
and add a similarity tag to the field type definition for the required fields.
Or is there some other way to do that?
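If that understanding is right, a minimal sketch, assuming Lucene/Solr 4.x
and that the goal (as earlier in the thread) is a similarity that returns 1
for term frequency; the class name is made up:

import org.apache.lucene.search.similarities.DefaultSimilarity;

public class FlatTfSimilarity extends DefaultSimilarity {
    // Score any matching term as 1, no matter how often it occurs.
    @Override
    public float tf(float freq) {
        return freq > 0 ? 1.0f : 0.0f;
    }
}

The field type's similarity tag would then point at this class (with a global
similarity factory that honors per-field similarity).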
Rgds
AJ
On 17-
It will affect the phrase queries. That is why I am not using the suggested
configuration.
On Thu, Jan 17, 2013 at 7:20 AM, Chris Hostetter
wrote:
>
> : Or there is some other way to do that?
>
> I'm late to this thread, but what was wrong with the simple suggestion of
> omitTermFreqAndPositions="true"
ltset format.
Secondly, while a new response attribute makes sense, the question is
whether numFound should be the number of groups or the total. To me it should
be the number of groups, because logically that is what the result set shows,
and the new attribute should hold the total.
Thanks
Amit
ough to
simply say a full copy is needed if the slave's index version is >=
master's index version. I'll create a patch and file a bug along with a
more thorough writeup of how I got in this state.
Thanks!
Amit
On Thu, Jan 24, 2013 at 2:33 PM, Amit Nithian wrote:
> Does Solr
Okay one last note... just for closure... looks like it was addressed in
solr 4.1+ (I was looking at 4.0).
On Thu, Jan 24, 2013 at 11:14 PM, Amit Nithian wrote:
> Okay so after some debugging I found the problem. While the replication
> piece will download the index from the master serv
To add to Jack's reply, Solr can also be embedded into the application and run
in the same process. Solr is the server-ization of Lucene. The line is very
blurred, and Solr is not a very thin wrapper around the Lucene library.
Most Solr features are distinct from Lucene, like
- detailed breakdown of scoring
Have you looked at the "pf" parameter for dismax handlers? I think pf does
what you are looking for, which is to boost documents where the query terms
match exactly as a phrase in the various fields, with some phrase slop.
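For instance, a sketch via SolrJ; the field names and boosts are made up:

import org.apache.solr.client.solrj.SolrQuery;

public class PhraseBoost {
    public static SolrQuery query(String userInput) {
        SolrQuery q = new SolrQuery(userInput);
        q.set("defType", "dismax");
        q.set("qf", "title description");      // fields each term must hit
        q.set("pf", "title^10 description^5"); // boost near-phrase matches
        q.set("ps", "2");                      // phrase slop
        return q;
    }
}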
On Wed, Feb 13, 2013 at 2:59 AM, Hemant Verma wrote:
> Hi All
>
> I have a use case wi
Ultimately this is dependent on what your metrics for success are. For some
places it may be just raw CTR (did my click-through rate increase), but for
other places it may be a function of money (gross revenue, profit, number of
items sold, etc.). I don't know if there is a generic answer fo
m being removed. What OS are you using and is
the index/ directory stored on a local file system vs NFS?
HTH
Amit
On Tue, Feb 12, 2013 at 2:26 AM, Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
>
> Now this is strange, the index generation and index version
> is changing wi
Okay, so then that should explain the generation difference of 1 between the
master and slave.
On Wed, Feb 13, 2013 at 10:26 AM, Mark Miller wrote:
>
> On Feb 13, 2013, at 1:17 PM, Amit Nithian wrote:
>
> > doesn't it do a commit to force solr to recognize the changes?
>
> yes.
>
> - Mark
>
Ah yes, sorry, I misunderstood. Another option is to use n-grams so that
"projectmanager" is a term; then any query involving "project manager in india
with 2 years experience" would match higher because the query would contain
"projectmanager" as a term.
On Wed, Feb 13, 2013 at 9:56 PM, Hemant Verma w
must not be "ahead" of
the lucene one (b/c I don't control the classpath order and honestly this
shouldn't be a requirement to run a test) so it periodically bombed. This
little fix seems to have helped provided that you don't care about Lucene3x
vs Lucene40 for your tes
4.1 as this is next
on my TODO list so maybe I'll run into the same problem :-) but I wanted to
provide some info as I just recently dug through the replication code to
understand it better myself.
Cheers
Amit
On Wed, Feb 13, 2013 at 11:57 PM, Bernd Fehling <
bernd.fehl...@uni-bielefeld.de&
be added to
the next release of Solr as this is a fairly significant bug to me.
Cheers
Amit
On Thu, Feb 21, 2013 at 12:56 AM, Amit Nithian wrote:
> So the diff in generation numbers are due to the commits I believe that
> Solr does when it has the new index files but the fact that it's
at 1:24 AM, raulgrande83 wrote:
> Hi Amit,
>
> I have came across some JIRAs that may be useful in this issue:
> https://issues.apache.org/jira/browse/SOLR-4471
> https://issues.apache.org/jira/browse/SOLR-4354
> https://issues.apache.org/jira/browse/SOLR-4303
> https://i
Sounds good. I am trying the combination of my patch and 4413 now to see how
it works, and I will have to see if I can put unit tests around them, as some
of what I thought may not be true with respect to the commit generation
numbers.
For your issue above in your last post, is it possible that there w
Yeah I had a similar problem. I filed and submitted this patch:
https://issues.apache.org/jira/browse/SOLR-4310
Let me know if this is what you are looking for!
Amit
On Mon, Feb 25, 2013 at 1:50 PM, Teun Duynstee wrote:
> Ah, I see. The docs say "Although this result format does not
This is cool! I had done something similar, except changing values via JConsole/JMX:
https://issues.apache.org/jira/browse/SOLR-2306
We had something not as nice at Zvents, but I wanted to expose these as
MBean properties so you could change them via any JMX UI like JVisualVM.
Cheers!
Amit
On Mon, Feb
I need to write some tests, which I hope to do tonight, and then I think
it'll get into 4.2.
On Tue, Feb 26, 2013 at 6:24 AM, Nicholas Ding wrote:
> Thanks Amit, that's cool! So it will also be fixed on Solr 4.2, right?
>
> On Mon, Feb 25, 2013 at 6:04 PM, Amit Nithian wrote:
I don't know a ton about SolrCloud, but for our setup, my limited
understanding is that you start to bleed operational and
non-operational aspects together, which I am not comfortable doing (i.e.
software load balancing). Also, adding ZooKeeper to the mix is yet another
thing to install, setu
t said having high-availability masters requires some fairly
complicated setups, and I guess I am underestimating how
expensive/complicated our setup is relative to what you can get out of the
box with SolrCloud.
Thanks!
Amit
On Thu, Feb 28, 2013 at 6:29 PM, Erick Erickson wrote:
> Amit:
>
>
But does that mean that in SolrCloud, slave nodes are busy indexing
documents?
On Fri, Mar 1, 2013 at 5:37 AM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Amit,
>
> NRT is not possible in a master-slave setup because of the necessity
> of a hard comm
We too run a ping every 5 seconds, and I think the concurrent mark/sweep
collector helps keep the LB from taking a box out of rotation due to long
pauses. Either that or I don't see pauses large enough for my LB to take it
out (it would have to fail 3 times in a row, or 15 seconds total, before it's gone).
The
aster (in a
non-cloud environment) or set up SolrCloud; either option would give you more
redundancy than copying an index to HDFS.
- Amit
On Wed, Mar 6, 2013 at 12:23 PM, Joseph Lim wrote:
> Hi Upayavira,
>
> sure, let me explain. I am setting up Nutch and SOLR in hadoop environment.
>
setting up a simple
master/slave replication scheme; that's really easy.
Cheers
Amit
On Wed, Mar 6, 2013 at 9:55 PM, Joseph Lim wrote:
> Hi Amit,
>
> so you mean that if I just want to get redundancy for solr in hdfs, the
> only best way to do it is to as per what Otis su
with-long-query.html
Thanks!
Amit
Jan
Thanks for your feedback! If possible, can you file these requests on the
GitHub page for the extension so I can work on them? They sound like great
ideas, and I'll try to incorporate all of them in future releases.
Thanks
Amit
On May 11, 2012 9:57 AM, "Jan Høydahl" wrot
Erick,
Yes, thanks, I did see that and am already working on a solution.
I hope to post a new revision shortly and eventually migrate to the extension
"store".
Cheers
Amit
On May 15, 2012 9:20 AM, "Erick Erickson" wrote:
> I think I put one up already, but in
e.org/solr/SolrTomcat#Installing_Solr_instances_under_Tomcat
)
4) Try to open http://:8080/solr-example/admin or
http://:8080/solr-example.
(While accessing these pages I am getting the above-mentioned error.)
Kindly help me out in resolving this problem.
Thanks in advance.
With Regards,
Amit Handa
"tab" to
edit the next row, but it helps a bit with that problem.
Please keep submitting issues as you encounter them and I'll address
them as best I can. I hope that this helps everyone!
Thanks!
Amit
On Tue, May 15, 2012 at 6:20 PM, Amit Nithian wrote:
> Erick
>
> Yes th
Hi,
Thanks for your advice.
It is basically a meta-search application. Users can perform a search on N
data sources at a time. We broadcast a parallel search to each selected data
source and write data to Solr using a custom-built API (the API and
Solr are deployed on separate machines; the API jo
indexed (with a keyword tokenizer) and
> everything else only stored? Also, are you sure that Solr is the best option
> as a key-value store?
>
> Jens
>
> On 05/23/2012 04:34 AM, Amit Jha wrote:
>> Hi,
>>
>> Thanks for your advice. It is basically a meta sea
Ashutosh,
Do you want to import data into Solr? Please explain the use case. How are you
performing search in the current scenario? And what is expected from Solr?
Rgds
AJ
On 22-Jun-2012, at 15:09, "Ashutosh Puspwan" wrote:
> Dear Sir/Mam
>
> I am a beginner in apache solr. I want to search data
On 22-Jun-2012, at 11:30, Alok Bhandari wrote:
> Hello,
>
> the requirement which I have is that on the Solr side we have indexed data of
> multiple customers, and for each customer we have at least a million documents.
> After executing a search, the end user wants to sort on some fields in the
> datagrid, let's sa
;sort=unix-timestamp%20desc&start=0&rows=10&qt=dismax&wt=dismax&fl=*,score&hl=on&hl.snippets=1
Any pointers?
Thanks,
Amit
results only. Hoping that through frange
I will be able to define a lower limit on the relevance score and get better
results on date sort.
Is there any other way to do this?
Hope it's clear.
- Amit
On 10-Aug-2011, at 7:52 PM, simon wrote:
> I meant the frange query, of course
>
> On We
Thanks Yonik. It solved the issue.
On 11-Aug-2011, at 6:44 PM, Yonik Seeley wrote:
> On Wed, Aug 10, 2011 at 5:57 AM, Amit Sawhney wrote:
>> Hi All,
>>
>> I am trying to sort the results on a unix timestamp using this query.
>>
>> http://url.com:8983/solr/d
re is sufficient
interest, I'll re-apply this patch to trunk and try and devise some
tests.
Thanks!
Amit
On Tue, Jul 3, 2012 at 5:08 PM, nanshi wrote:
> Jack, can you please explain this in some more detail? Such as how to write
> my own search component to modify request to add bq p
n the cache, possibly increasing my cache efficiency?
I read about the lazy loading of fields, which seems like a good way to
maximize the cache and gain the advantage of storing data in Solr too.
Thanks
Amit
On Sat, Jun 30, 2012 at 11:01 AM, Giovanni Gherdovich
wrote:
> Thank you François and
ello my world" may match "hello world" depending on this slop
value). The "qf" means that, given a multi-term query, each term is searched
in the specified fields (name, description, whatever text fields you
want).
Best
Amit
On Mon, Jul 2, 2012 at 9:35 AM, Chamnap Chhorn wrote:
are doing and what application architectures with
Solr look like.
Thanks!
Amit
rs doing a blank search (no text) for
something, or are you returning More Like This results that were
generated as a result of a user typing some text query? I may have
built this patch assuming a blank query, but I can try to make it
work for text-based queries.
Thanks
Amit
On
your site).
Thanks again!
Amit
On Wed, Jul 4, 2012 at 1:09 AM, Paul Libbrecht wrote:
> Amit,
>
> not exactly a response to your question but doing this with a lucene index on
> i2geo.net has resulted in considerably performance boost (reading from
> stored-fields instead of reading fro
Sorry, I'm a bit new to the NRT stuff in Solr, but I'm trying to understand
the implications of frequent commits on cache rebuilding and autowarming.
What are the best practices surrounding NRT searching, caches, and query
performance?
Thanks!
Amit
lways
thought segments were an implementation detail where they get merged on
optimize, etc., so wouldn't that affect clients that depend on segment-level
stuff? Or what am I missing?
Thanks again!
Amit
On Jul 7, 2012 9:22 AM, "Andy" wrote:
> So If I want to use multi-value fac
;t setting "stored=true" on fields
that don't need it? This will increase the index size, and possibly the
cache size if lazy loading isn't enabled (to be honest, I am a bit unclear
on this part since I haven't had much experience with it
myself).
Thanks
Amit
On Mon, Aug 13, 2
I think your thought about using the edge ngram as a field and
boosting that field in the qf/pf sections of the dismax handler sounds
reasonable. Why do you have qualms about it?
On Fri, Sep 7, 2012 at 12:28 PM, Kiran Jayakumar wrote:
> Hi,
>
> Is it possible to score documents with a match "earl
have the remote debug options and port set up. Eclipse can connect
fairly easily to this in the debug configuration menu.
Thanks
Amit
On Mon, Mar 26, 2012 at 4:13 AM, Erick Erickson wrote:
> Depending upon what you actually need to do, you could consider just
> attaching to the running So
I've been using
this for years and it works fairly well.
Cheers!
Amit
On Thu, May 31, 2012 at 7:01 AM, Bogdan Nicolau wrote:
> I've also tried a lot of tricks to get xpointer working with multiple child
> elements, to no success.
> In the end, I've resorted to a
nvironment, I have a
rolling restart script that bounces a set of servers when the
schema/solrconfig changes.
HTH
Amit
On Mon, Sep 10, 2012 at 11:10 PM, Abhishek tiwari
wrote:
> HI All,
>
> I have 1 master and 3 slave Solr servers (version 3.6).
> What kind of replication policy should
pay attention to the fact that getting even the "indexed"
representation of a field given a document is not fast.
Thanks
Amit
On Tue, Sep 11, 2012 at 4:03 PM, wrote:
> Hi,
>
> I have a StrField to store an URL. The field definition looks like this:
> />
>
> Type "s
program or download the patch and apply it, but either
way it should fix the classpath issues.
Then import the project and you can follow the remainder of the steps
in the
http://www.lucidimagination.com/developers/articles/setting-up-apache-solr-in-eclipse
article.
Cheers
Amit
On Mon, Sep
I have wondered about this too but instead why not just set your cache
sizes large enough to house most/all of your documents and pre-warm
the caches accordingly? My bet is that a large enough document cache
may suffice but that's just a guess.
- Amit
On Mon, Sep 10, 2012 at 10:56 AM,
If the fact that it's "original" vs. "generic" is captured in a 0/1 field
"is_original", can you sort by is_original? Similarly, could you put a huge
boost on is_original in the dismax handler so that documents matching on
is_original score higher than those that aren't original? Or is your goal to
not show generics *at a
"infinity"?
Thanks
Amit
f or maybe an ID where you can
re-construct the document for indexing.
There are probably other solutions too, but those are the three that come
to mind offhand. Where I work, we use #2, with incremental index
processes that check for changes since some last known time and
index them.
- Amit
On Tue, Sep
I think one way to do this is to issue another query and set a bunch of
filter queries to restrict "interesting_facet" to just those ten
values returned in the first query:
fq=interesting_facet:1 OR interesting_facet:2 etc&q=context:
Does that help?
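A sketch of that second query in SolrJ, with made-up facet values and query:

import org.apache.solr.client.solrj.SolrQuery;

public class FacetRestrictedQuery {
    public static SolrQuery query() {
        SolrQuery q = new SolrQuery("context:sports"); // hypothetical main query
        // Restrict to the facet values returned by the first query.
        q.addFilterQuery("interesting_facet:(1 OR 2 OR 3)");
        return q;
    }
}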
Amit
On Thu, Sep 27, 2012 at 6:
isn't set since the cache hit bypasses the Lucene level.
I'll write up what I did and probably try to open source the work for
others to see. The stuff with PostFiltering is nice but needs some
examples and documentation; hopefully mine will help the cause.
Thanks again
Amit
On Wed, Sep 2
Is there a maven repository location that contains the nightly build
Maven artifacts of Solr? Are SNAPSHOT releases being generated by
Jenkins or anything so that when I re-resolve the dependencies I'd get
the latest snapshot jars?
Thanks
Amit
I think you'd want to start by looking at rb.getQuery() in
prepare (or process if you are trying to do post-results analysis).
This returns a Query object that contains everything, and
I'd then look at the Javadoc to see how to traverse it. I'm sure some
runtime type-casting may
ion from the class
> org.apache.lucene.search.Query
> I can just iterate over the terms using the method extractTerms. How can I
> extract the operators?
>
> 2012/10/4 Amit Nithian
>
>> I think you'd want to start by looking at the rb.getQuery() in the
>> prepare (or
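A minimal sketch of that traversal, assuming Lucene 4.x; the "operators" come
back as BooleanClause.Occur values (MUST/SHOULD/MUST_NOT) rather than strings:

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class OperatorWalker {
    // Recursively walk a query tree, printing each clause's occur flag.
    public static void walk(Query q) {
        if (q instanceof BooleanQuery) {
            for (BooleanClause clause : ((BooleanQuery) q).clauses()) {
                System.out.println(clause.getOccur() + " " + clause.getQuery());
                walk(clause.getQuery()); // the runtime type-casting mentioned above
            }
        }
    }
}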
What's preventing you from using the spell checker, taking the #1
suggestion, and re-issuing the query from a subclass of the query component?
It should be reasonably fast to re-execute the query from the server
side since you are already within Solr. You can modify the response to
indicate that the new
a huge concern; however, I do want to understand
why with the grouping feature enabled, this doesn't work and whether
or not it's a bug.
Any help on this would be appreciated so that my solution to this
problem is complete.
Thanks!
Amit
, I have some code and a blog post that I am going to write soon
about it. Shoot me a private note and I'll zip it up and send it to you. I
have it as a separate component.
Thanks
Amit
On Sun, Oct 14, 2012 at 4:47 PM, Erick Erickson wrote:
> bq: is there any way to get a sum of all the scor
py to work on this patch but before I do, I
wanted to check that I am not missing something first.
Thanks
Amit
I am not sure if this repository
https://repository.apache.org/content/repositories/releases/ works, but
the modification dates seem reasonable given the timing of the
release. I suspect it'll be on Maven Central soon (hopefully).
On Wed, Oct 17, 2012 at 11:13 PM, Grzegorz Sobczyk
wrote:
> Hello
>
ce testing against a
typical production setup *with* caching will also be done to make sure
things behave as expected.
Thanks!
Amit
What about querying on the dynamic lat/long field to see if there are
documents that do not have the dynamic _latlon0 or whatever defined?
On Fri, Oct 19, 2012 at 8:17 AM, darul wrote:
> I have already tried but get a nice exception because of this field type :
>
>
>
>
> --
> View this message in
helps
Amit
On Fri, Oct 19, 2012 at 9:37 AM, darul wrote:
> Your idea looks great but with this schema info :
>
> subFieldSuffix="_d"/>
> subFieldSuffix="_coordinate"/>
>
> .
>
>
> stored="false" />
>
> How ca
s so I can better understand?
Thanks!
Amit
where.
Thanks
Amit
On Sat, Oct 20, 2012 at 5:22 PM, Mikhail Khludnev
wrote:
> Amit,
>
> Sure. this method
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L796beside
> some other stuff calculates fq's doc
0 docs, hence why it never went down this leap-frog
approach in my debugging.
The next question, though, is: what is the significance of this < 100? Is
it supposed to be a heuristic for determining the sparseness of the
filter bit set?
Thanks again
Amit
On Sat, Oct 20, 2012 at 7:12 PM, Amit Nithi
On the surface this looks like you could use the minimum-should-match (mm)
feature of the dismax handler and alter that behavior depending on
whether the search is your main search or your fallback search,
as you described in your (c) case.
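A sketch of that two-pass idea with SolrJ; the mm values are examples only:

import org.apache.solr.client.solrj.SolrQuery;

public class MainThenFallback {
    public static SolrQuery main(String input) {
        SolrQuery q = new SolrQuery(input);
        q.set("defType", "dismax");
        q.set("mm", "100%"); // strict: every term must match
        return q;
    }

    // Issued only if the main query returns nothing.
    public static SolrQuery fallback(String input) {
        SolrQuery q = new SolrQuery(input);
        q.set("defType", "dismax");
        q.set("mm", "1"); // relaxed: any single term may match
        return q;
    }
}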
On Sat, Oct 20, 2012 at 1:13 AM, Uma Mahesh wrote:
> Hi
I'm not 100% sure about this, but it looks like update processors may help:
http://wiki.apache.org/solr/UpdateRequestProcessor
It looks like you can plug in custom code to execute when certain
actions happen, so this sounds like what you are looking for.
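A minimal sketch of such a hook, assuming Solr 4.x; the class name and what
you do inside processAdd are up to you:

import java.io.IOException;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class HookUpdateProcessorFactory extends UpdateRequestProcessorFactory {
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
            SolrQueryResponse rsp, UpdateRequestProcessor next) {
        return new UpdateRequestProcessor(next) {
            @Override
            public void processAdd(AddUpdateCommand cmd) throws IOException {
                SolrInputDocument doc = cmd.getSolrInputDocument();
                // Custom code runs here on every add, before the chain continues.
                super.processAdd(cmd);
            }
        };
    }
}

The factory is then referenced from an updateRequestProcessorChain in
solrconfig.xml.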
Cheers
Amit
On Wed, Oct 24, 2012 at 8:
Since Lucene is a library, there isn't much support for this: in
theory the client application issuing the delete could also then do
something else upon delete. Solr, on the other hand, being a (server)
layer sitting on top of Lucene, is where it makes sense for hooks to be
configured.
Is the goal to have the elevation data read from somewhere else? In
other words, why don't you want the elevate.xml to exist locally?
If you want to read the data from somewhere else, could you put a
dummy elevate.xml locally and subclass the QueryElevationComponent and
override the loadElevationM
missions or if you don't own the DB, check with your DBA
to find out what user you should use to access your DB.
- Amit
On Mon, Oct 29, 2012 at 9:38 PM, kunal sachdeva
wrote:
> Hi,
>
> I have tried using data-import in my local system. I was able to execute it
> properly. but whe
Hi, I am trying to index using AJAX, basically jQuery.
Below is my code:
try {
  $.ajax({
    type: "POST",
    url: "http://myserver:8080/solr/update?commit=true",
    // document with id 20 and name "trailblazers"
    data: "<add><doc><field name='id'>20</field><field name='name'>trailblazers</field></doc></add>",
    contentType: "text/xml",
don't control access to this DB, talk
to your sys admin who does maintain this access and s/he should be
able to help resolve this.
On Tue, Oct 30, 2012 at 7:13 AM, Travis Low wrote:
> Like Amit said, this appears not to be a Solr problem. From the command
> line of your machine, try
ld have to be a union of any child cores' schemas if you
are serializing a DocList out, which I didn't want to have.
This is a lot simpler than mucking with the dispatch filters.
Hope this helps!
Amit
On Fri, Nov 2, 2012 at 9:45 AM, Dzmitry Petrushenka wrote:
> Hi all!
>
&g
ell enough.
If I were to restrict access to certain parts of Solr, I'd do this outside
of Solr itself, in a servlet or a filter, inspecting the
parameters. It's easy to create a "modifiable" parameters class and
populate it with acceptable parameters before the So
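A rough sketch of that kind of gatekeeping as a servlet filter in front of
Solr; the whitelist is purely illustrative:

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class ParamWhitelistFilter implements Filter {
    private static final Set<String> ALLOWED =
        new HashSet<String>(Arrays.asList("q", "start", "rows", "fq", "fl", "wt"));

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        // Reject any request carrying a parameter we haven't explicitly allowed.
        for (Object name : req.getParameterMap().keySet()) {
            if (!ALLOWED.contains(name)) {
                ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
        }
        chain.doFilter(req, resp);
    }

    public void destroy() {}
}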
Are you trying to do this in real time or offline? Wouldn't mining your
access logs help? It may help to have your front-end application pass in
some extra parameters that are not interpreted by Solr but are there for
"stamping" purposes for log analysis. One example could be a user ID or
user coo
Look at the normal ngram tokenizer. "Engine" with ngram size 3 would yield
"eng", "ngi", "gin", "ine", so a search for "engi" should match. You can play
around with the min/max values. Edge ngram is useful for prefix matching,
but it sounds like you want intra-word matching too? ("eng" should match "
Residen
I think Solr does this by default. Are you executing warming queries in
the firstSearcher so that these actions are done before Solr is ready to
accept real queries?
On Thu, Nov 8, 2012 at 11:54 AM, Aaron Daubman wrote:
> Greetings,
>
> I have several custom QueryComponents that have high on
ing
the ping handler to return 200s) to avoid this problem.
Cheers
Amit
On Thu, Nov 8, 2012 at 1:33 PM, Aaron Daubman wrote:
> Amit,
>
> I am using warming /firstSearcher queries to ensure this happens before any
> external queries are received, however, unless I am misinterpreting t
quest until done) and then enable.
HTH!
Amit
On Thu, Nov 8, 2012 at 2:01 PM, Aaron Daubman wrote:
> > (plus when I deploy, my deploy script
> > runs some actual simple test queries to ensure they return before
> enabling
> > the ping handler to return 200s) to avoid this pro