There are two ways I've gotten around this issue:
1. Add replicas in the target data center after CDCR bootstrapping has
completed.
-or-
2. After the bootstrapping has completed, restart the replica nodes one at a time
in the target data center (restart, wait for the replica to catch up, then restart the next).
I'm not sure under what conditions it will be automatically triggered, but if
you want to manually trigger a CDCR bootstrap, you need to issue the following
request to the leader in your target data center.
/solr//cdcr?action=BOOTSTRAP&masterUrl=
The masterUrl will look something like (change th
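As an illustration (the host names, port, and collection name below are invented; substitute your own), the bootstrap request can be assembled and issued with curl, remembering to URL-encode the masterUrl parameter:

```shell
# Hypothetical leaders; replace with your own hosts/collection.
TARGET_LEADER="http://target-dc-host:8983/solr/myCollection"
SOURCE_LEADER="http://source-dc-host:8983/solr/myCollection"

# masterUrl must be URL-encoded when passed as a query parameter.
ENCODED_MASTER=$(printf '%s' "$SOURCE_LEADER" | sed 's|:|%3A|g; s|/|%2F|g')
BOOTSTRAP_URL="${TARGET_LEADER}/cdcr?action=BOOTSTRAP&masterUrl=${ENCODED_MASTER}"

echo "$BOOTSTRAP_URL"
# Issue it with: curl "$BOOTSTRAP_URL"
```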
Is there a recommended way of managing external files with SolrCloud? At first
glance it appears that I would need to manually manage the placement of the
external_.txt file in each shard's data directory. Is there a better
way of managing this (Solr API, interface, etc.)?
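For reference, ExternalFileField reads a plain-text file named external_<fieldname>(.*) from each core's data directory, one "docId=value" line per document. A minimal sketch (field name, doc ids, and the target path are invented for illustration):

```shell
# Sketch: build an external file for a hypothetical "popularity" field.
# ExternalFileField looks for external_<fieldname>(.*) in the core's
# data directory and reads one "docId=value" line per document.
cat > external_popularity.txt <<'EOF'
doc1=0.5
doc2=1.2
doc3=0.0
EOF

# Copy it into each shard replica's data directory (path is illustrative),
# then issue a commit so Solr re-reads it:
#   cp external_popularity.txt /var/solr/.../data/
#   curl 'http://localhost:8983/solr/update?commit=true'
cat external_popularity.txt
```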
re the override of the scorePayload method in WKSimilarity (it is
removed from TFIDFSimilarity). I wonder what alternatives there are for mapping
string payloads to floats and using them in a tunable formula for boosting.
Thanks,
Tom Burgmans
the
hotter shards than the colder shards? It seems to add a lot of
complexity - should I just instead think that they aren't getting
queried much, so won't be using up cache space that the hot shards
will be using. Disk space is pretty cheap after all (total size for
"items" + "lists" is under 60GB).
Cheers
Tom
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:135)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.base/java.lang.Thread.run(Unknown Source)
2020-06-09 02:12:58.507 INFO (qtp90045638-16) [c:products_20200609__CRA__NEW_CATEGORY_ROUTED_ALIAS_WAITING_FOR_DATA_TEMP s:shard1 r:core_node2 x:products_20200609__CRA__NEW_CATEGORY_ROUTED_ALIAS_WAITING_FOR_DATA_TEMP_shard1_replica_n1] o.a.s.c.S.Request [products_20200609__CRA__NEW_CATEGORY_ROUTED_ALIAS_WAITING_FOR_DATA_TEMP_shard1_replica_n1] webapp=/solr path=/update/json/docs params={} status=400 QTime=2422
Cheers
Tom
.
Looks like it might be manually set up and managed collections and
aliases for now.
Cheers
Tom
On Mon, Jun 8, 2020 at 12:43 PM Radu Gheorghe
wrote:
>
> Hi Tom,
>
> To your last two questions, I'd like to vent an alternative design: have
> dedicated "hot" and
ve 40 different ones, even with
different properties).
- the same issue applies to length normalization; Lucene has a "field
length" but really no concept of document length."
Tom
On Thu, Apr 14, 2016 at 12:41 PM, David Cawley
wrote:
> Hello,
> I am developing
/package-summary.html#changingSimilarity
Has something changed between 4.1 and 5.2 that actually will prevent
changing Similarity without re-indexing from working, or is this just a
warning in case at some future point someone contributes code so that a
particular similarity takes advantage of a different index format?
Tom
thread, but they
each write to their own segments, and (I think) all the threads are in the
same Solr process),
Are we safe using locktype=single?
Tom
maybe somehow a correctly configured Solr might have multiple
processes writing to the same file.
I'm wondering if your explanation above might be added to the
documentation.
Tom
On Fri, Feb 20, 2015 at 1:25 PM, Chris Hostetter
wrote:
>
> : We are using Solr. We would not co
gment, since we gave the argument maxSegments=2. This didn't happen.
Any suggestions about how to troubleshoot this issue would be appreciated.
Tom
---
Excerpt from indexwriter log:
TMP][http-8091-Processor5]: findForcedMerges maxSegmentCount=2 ...
...
[IW][Lucene Merge Thread #0]
detection.
If you have German, a filter length of 25 might be too low (because of
compounding). You might want to analyze a sample of your German text to
find a good length.
Tom
http://www.hathitrust.org/blogs/Large-scale-Search
On Wed, Feb 25, 2015 at 10:31 AM, Rishi Easwaran
wrote:
>
getting the dreaded "ClassCastException Class.asSubclass(Unknown Source)"
error (see below).
This is looking like a complex classloader issue. Should I put the file
somewhere else and/or declare a lib directory in solrconfig.xml?
Any suggestions on how to troubleshoot this?
Tom
index/DocValuesType.html
Is the comment in the example schema file completely wrong, or is there
some issue with using docValues with a multivalued StrField?
Tom Burton-West
https://www.hathitrust.org/blogs/large-scale-search
se case as Otis suggested. In our use
case sometimes this is appropriate, but we are investigating the
possibility of other methods of scoring the group based on a more flexible
function of the scores of the members (i.e scoring book based on function
of scores of chapters).
Tom Burton-West
http://www
anese character sets. For
example the config given in the JavaDocs tells it to make bigrams across 3
of the different Japanese character sets. (Is the issue related to Romaji?)
http://lucene.apache.org/core/4_7_1/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilterFactory.html
Tom
the test machine when I have time.
Tom
I suspect this is behavior as
designed. My guess is that the bigram filter figures that if there was
space in the original input (to the whole filter chain), it should not
create a bigram across it.
Tom
BTW: if you can show a few examples of Japanese queries that show the
original problem and
ity and emitting 1f in
tf() is probably the best way to eliminate using tf counts,
assuming that is really what you want.
Tom
On Tue, Apr 1, 2014 at 4:17 PM, Walter Underwood wrote:
> Thanks! We'll try that out and report back. I keep forgetting that I want
> to try BM25, s
the
code.)
When you set k1 to 0 it does just what you said, i.e., it provides binary tf.
That part of the formula returns 1 if the term is present and 0 if not.
Which is I think what Wunder was trying to accomplish.
Sorry about jumping in without double checking things first.
Tom
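For context on why k1 = 0 yields binary tf, here is the usual BM25 term-frequency component (standard formulation, not quoted from this thread; IDF factor omitted):

```latex
% BM25 term-frequency component (IDF omitted):
\mathrm{tfnorm}(t,d) \;=\;
  \frac{(k_1 + 1)\,\mathit{tf}}
       {k_1\!\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right) + \mathit{tf}}
% Setting k1 = 0 collapses every k1-dependent term:
\left.\mathrm{tfnorm}(t,d)\right|_{k_1=0}
  \;=\; \frac{\mathit{tf}}{\mathit{tf}} \;=\; 1
  \qquad (\mathit{tf} > 0)
```

So any matching term contributes 1 regardless of how often it occurs, which is the binary-tf behavior described above.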
On Fri, Apr 4
and I've been focused
on other things relating to Solr 4. I'd love to hear any results from
someone who is testing for a batch indexing use case and has tested
various xxxDirectoryFactory implementations. Please let me know your
results if you do end up doing some testing.
Tom
On Sa
version of something like that for the INEX book
track. I'll see if I can find the code and if it is in any shape to share.
Tom
Tom Burton-West
Information Retrieval Programmer
Digital Library Production Service
University of Michigan Library
tburt...@umich.edu
http://www.hathitrust.org/blogs/large-sc
e.org/jira/browse/SOLR-545
Tom
s.
I'm still trying to sort out the old and new style solr.xml/core
configuration stuff. Thanks for your help.
Tom
On Wed, Feb 5, 2014 at 4:31 PM, Chris Hostetter wrote:
>
> : I then tried to locate some config somewhere that would specify that the
> : default core would be co
mers such as the Greek stemmer will
pass through any strings that don't contain characters in the Greek script.
So it might be possible to at least do stemming on some of your
languages/scripts.
I'll be very interested to learn what approach you end up using.
Tom
--
Tom Burton-West
Information Retrieval Programmer
Digital Library Production Service
University of Michigan Library
tburt...@umich.edu
http://www.hathitrust.org/blogs/large-scale-search
ment in
information retrieval (SIGIR '08). ACM, New York, NY, USA, 813-814.
DOI=10.1145/1390334.1390518 http://doi.acm.org/10.1145/1390334.1390518
I hope this helps.
Tom
On Mon, Sep 8, 2014 at 1:33 AM, Ilia Sretenskii wrote:
> Thank you for the replies, guys!
>
> Using f
g
the termIndexInterval.
Can someone please confirm that these two parameter settings,
termIndexInterval and termsIndexDivisor, do not apply to the default
PostingsFormat for Solr 4.10?
Tom
rms with the keyword attribute more
weight?
What am I missing?
Tom
-
"A repeated question is "how can I have the original term contribute
more to the score than the stemmed version"? In Solr 4.3, the
KeywordRepeatFilterFactory has
ache any
results list that contains more than queryResultMaxDocsCached documents?
If so, I will add a comment to the Cwiki doc and open a JIRA and submit a
patch to the example file.
Tom
.
---
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_10/solr/example/solr/coll
you would
instead use Lucene41PostingsFormat.Lucene41PostingsFormat(int, int)
<http://lucene.apache.org/core/4_10_0/core/org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.html#Lucene41PostingsFormat(int,%20int)>.
which can also be configured on a per-field basis:"
Tom
On Thu, Sep 18, 2014 at 1:42 PM, Chris Host
very large XML documents, and the examples I see all build documents
by adding fields in Java code. Is there an example that actually reads XML
files from the file system?
Tom
w to use ConcurrentUpdateSolrServer with XML
> documents
>
> I have very large XML documents, and the examples I see all build
documents
> by adding fields in Java code. Is there an example that actually reads
XML
> files from the file system?
Tom
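As an alternative that sidesteps SolrJ entirely, XML files can be posted straight from the file system using Solr's XML update format and curl (the core URL and field names below are invented for illustration):

```shell
# Create a minimal Solr XML update file (field names are hypothetical).
cat > docs.xml <<'EOF'
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="title">An example document</field>
  </doc>
</add>
EOF

# Post it to a (hypothetical) core and commit:
#   curl 'http://localhost:8983/solr/update?commit=true' \
#        -H 'Content-Type: text/xml' --data-binary @docs.xml
cat docs.xml
```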
has metadata.
I'm now thinking that for testing purposes it might be sufficient to
construct dummy documents as in the examples rather than trying to use our
actual documents. If the speed improvements look significant enough, then
I'd need to figure out how to test with real documents.
Thanks again for all the input.
Tom
ed on a per-field basis"
How can we configure Solr to use different (i.e., non-default) minimum and
maximum block sizes?
Tom
Thanks Michael and Hoss,
assuming I've written the subclass of the postings format, I need to tell
Solr to use it.
Do I just do something like:
Is there a way to set this for all fieldtypes or would that require writing
a custom CodecFactory?
Tom
On Mon, Jan 12, 2015 at 4:46 PM,
se case
for replacing a TermIndexInterval setting with changing the min and max
block size on the 41 postings format?
Tom
On Tue, Jan 13, 2015 at 3:16 PM, Chris Hostetter
wrote:
>
> : ...the nuts & bolts of it is that the PostingFormat baseclass should take
> : care of all the
ndler registered to the same name: /update ignoring:
org.apache.solr.handler.UpdateRequestHandler
Is this a bug? Is there something wrong with the out of the box example
configuration?
Tom
in indexing as well?
Does the NRTCachingDirectory have any benefit for indexing under the use
case noted above?
I'm guessing we should just use solr.StandardDirectoryFactory instead.
Is this correct?
Tom
---
Hello,
We are seeing the message "too many merges...stalling" in our indexwriter
log. Is this something to be concerned about? Does it mean we need to
tune something in our indexing configuration?
Tom
nvestigate.
Tom
On Thu, Jul 11, 2013 at 5:29 PM, Shawn Heisey wrote:
> On 7/11/2013 1:47 PM, Tom Burton-West wrote:
>
>> We are seeing the message "too many merges...stalling" in our indexwriter
>> log. Is this something to be concerned about? Does it mean we nee
requested to 100,000, I have no problems.
Does Solr have a limit on number of rows that can be requested or is this
a bug?
Tom
INFO: [core] webapp=/dev-1 path=/select
params={shards=XXX:8111/dev-1/core,XXX:8111/dev-2/core,XXX:8111/dev-3/core&fl=vol_id&indent=on&start=0&q=*:*&am
page
level, which would result in about 3 billion pages. So testing the
scalability of queries used by our current production system, such as the
query against the index that is not released to production to get a list of
the unique ids that are actually indexed in Solr is part of that testing
pro
n see the posts that the shards are
sending to the head shard and actually get a good measure of how many bytes
are being sent around.
I'll poke around and look at multipartUploadLimitInKB, and also see if
there is some servlet container limit config I might need to mess with.
Tom
On Thu,
}
hits=119220943 status=0 QTime=52952
Tom
INFO: [core] webapp=/dev-1 path=/select
params={fl=vol_id&indent=on&start=700&q=*:*&rows=100}
hits=119220943 status=0 QTime=9772
Jul 25, 2013 5:39:43 PM org.apache.solr.core.SolrCore execute
INFO: [core] webapp=/dev-1
If I am using solr.SchemaSimilarityFactory to allow different similarities
for different fields, do I set discountOverlaps="true" on the factory or
per field?
What is the syntax? The below does not seem to work
Tom
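A hedged schema.xml sketch of one common way this is wired up (the field type name and analyzer are invented; parameter names follow Solr's BM25SimilarityFactory): with SchemaSimilarityFactory set globally, discountOverlaps is passed to the per-field similarity factory as a <bool> element rather than an attribute:

```xml
<!-- schema.xml sketch; names are illustrative, not from this thread -->
<similarity class="solr.SchemaSimilarityFactory"/>

<fieldType name="text_bm25" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
  <similarity class="solr.BM25SimilarityFactory">
    <float name="k1">1.2</float>
    <float name="b">0.75</float>
    <bool name="discountOverlaps">true</bool>
  </similarity>
</fieldType>
```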
rlaps.
Is the default for Solr 4 true?
1.2
0.75
false
On Thu, Aug 22, 2013 at 4:58 PM, Markus Jelsma
wrote:
> Hi Tom,
>
> Don't set it as attributes but as lists as Solr uses everywhere:
>
> true
>
>
> For BM25 you can also set
I should have said that I have set it both to "true" and to "false" and
restarted Solr each time and the rankings and info in the debug query
showed no change.
Does this have to be set at index time?
Tom
>
t says in the README.txt, I am
making some kind of a configuration error. I also don't understand the
workaround in SOLR-4852.
Is this an ICU issue? A Java 7 issue? A Solr 4.4 issue, or did I simply
no
he-box solrconfig.xml.
According to the README.txt, all that needs to be done is create the
collection1/lib directory and put the jars there.
However, I am getting the class not found error.
Should I open another bug report or comment on the existing report?
Tom
On Tue, Aug 27, 2013 at 6:48 PM
not explain why out-of-the-box, simply creating a
collection1/lib directory and putting the jars there does not work as
documented in both the README.txt and in solrconfig.xml.
Shawn, should I add these comments to your JIRA issue?
Should I open a separate related JIRA issue?
Tom
On Tue, Aug
nk it to yours or just add
this information (i.e. other scenarios where class loading not working) to
your JIRA?
Details below:
Tom
The documentation in the collections1/conf directory is confusing. For
example the collections1/conf/solrconfig.xml file says you should put a
./lib dir in
cache
warming with your most common terms.
On the other hand as Jan pointed out, you may be cpu bound because Solr
doesn't have early termination and has to rank all 90 million docs in order
to show the top 10 or 25.
Did you try the OR search to see if your CPU is at 100%?
Tom
On Fri, Mar 22,
IndexWriterConfig.html#setTermIndexInterval%28int%29
This is followed by an example of how to set the min and max block size in
Lucene.
Is the ability to set the min and max block size available in Solr?
If not, should I open a JIRA?
Tom
--
Excerpt from the Solr 4.3 latest rev of the example/solr
the Solr
TieredMergePolicy to set the parameters:
setMaxMergeAtOnce, setSegmentsPerTier, and setMaxMergedSegmentMB?
Tom Burton-West
ng to
'setMaxMergedSegmentMB' in org.apache.lucene.index.TieredMergePolicy"
Tom Burton-West
l confused about the mergeFactor=10 setting in the example
configuration. Took a quick look at the code, but I'm obviously looking in the
wrong place. Is mergeFactor=10 interpreted by TieredMergePolicy as
segmentsPerTier=10 and maxMergeAtOnce=10? If I specify values for these is
the mergeFacto
vant
documents and select the 5 or 30 facet values with the highest counts for those
relevant documents.
Is this possible or would it require writing some lucene or Solr code?
Tom Burton-West
http://www.hathitrust.org/blogs/large-scale-search
ntire result set. In my use case the
top 10,000 hits versus all 170,000.
Tom
-Original Message-
From: Lan [mailto:dung@gmail.com]
Sent: Thursday, September 29, 2011 7:40 PM
To: solr-user@lucene.apache.org
Subject: Re: Getting facet counts for 10,000 most relevant hits
I implemen
I'll go ahead and do some performance tests on my
kludge. That might work for us as an interim measure until I have time to dive
into the Solr/Lucene distributed faceting code.
Tom
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Friday, September 30,
-fly. That way they can search within
the document and get page level results.
More details about our setup: http://www.hathitrust.org/blogs/large-scale-search
Tom Burton-West
University of Michigan Library
www.hathitrust.org
-Original Message-
g the
"nomerge" merge policy. I hope to have some results to report on our blog
sometime in the next month or so.
Tom Burton-West
www.hathitrust.org/blogs
-Original Message-
From: kenf_nc [mailto:ken.fos...@realestate.com]
Sent: Sunday, July 18, 2010 8:18 AM
To: solr-user@lucene.apa
I haven't dug into the code so I don't
actually know how the tii file gets loaded into a data structure in memory. If
there is api access, it seems like this might be the quickest way to get the
number of unique terms. (Of course you would have to do this for each segment).
Tom
-Origin
rter2 stemmer page:
http://snowball.tartarus.org/algorithms/english/stemmer.html
Tom Burton-West
http://www.hathitrust.org/blogs/large-scale-search
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Friday, July 30, 2010 4:42 PM
To: solr-user@lucene.apach
earch/slow-queries-and-common-words-part-2)
Tom Burton-West
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Tuesday, August 10, 2010 9:54 AM
To: solr-user@lucene.apache.org
Subject: Improve Query Time For Large Index
Hi,
I have 5 Million small documents/tweets (=&
ide to use CommonGrams
you definitely need to re-index and you also need to use both the index time
filter and the query time filter. Your index will be larger.
Tom
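A sketch of the two-sided wiring this implies (the field type name and words file are invented): CommonGramsFilterFactory at index time and CommonGramsQueryFilterFactory at query time, both pointing at the same common-words list:

```xml
<!-- schema.xml sketch; names are illustrative -->
<fieldType name="text_cg" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory" words="commonwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory" words="commonwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```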
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Tuesday, August 10, 2010 3:32 PM
To:
ar.baz). The
debug/explain will indicate whether the parsed query is a PhraseQuery.
Tom
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Thursday, August 12, 2010 5:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Improve Query Time For Large Index
Hi Tom,
+1
I just had occasion to debug something where the interaction between the
queryparser and the analyzer produced *interesting* results. Having a separate
jsp that includes the whole chain (i.e. analyzer/tokenizer/filter and qp) would
be great!
Tom
-Original Message-
From: Michael
600,000
full-text books in each shard).
In interpreting the jmap output, can we assume that the listings for utf8
character arrays ("[C"), java.lang.String, long arrays ("[J"), and int
arrays ("[I") are all part of the data structures involved in representing the
tii
Solr is waiting on GC?
If we could get each GC to take under a second, with the trade-off
being that GC would occur much more frequently, that would help us avoid the
occasional query taking more than 30 seconds at the cost of a larger number of
queries taking at least a second.
Tom
ex we
plan to use a more intelligent filter that will truncate extremely long tokens
on punctuation and we also plan to do some minimal prefiltering prior to
sending documents to Solr for indexing. However, since we now have over 400
languages, we will have to be conservative in our filtering since we would
rather index dirty OCR than risk not indexing legitimate content.
Tom
o see if we can provide you with our tii/tis
data. I'll let you know as soon as I hear anything.
Tom
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Sunday, September 12, 2010 10:48 AM
To: solr-user@lucene.apache.org; simon.willna...@gmail.com
Subject: Re: Solr
I'll be testing
using termIndexInterval with Solr 1.4.1 on our test server.
Tom
-Original Message-
From: Grant Ingersoll [mailto:gsing...@apache.org]
> What are your current GC settings? Also, I guess I'd look at ways you can
>reduce the heap size needed.
>> Cac
believe Robert Muir, who is an expert on the various problems involved and
opened Lucene-2458 is working on a better fix.
Tom Burton-West
http://www.hathitrust.org/blogs/large-scale-search
e they have
the same position, they are not turned into a phrase query.
for "l'art"
input
position|1
token |l'art
output
position|1|2
token |l|art
In this case there are two tokens with different positions so it treats them as
a phrase query.
Tom Burton-West
y with the *default* query operator as set in SolrConfig rather than
necessarily using the Boolean "OR" operator?
i.e. if
and autoGeneratePhraseQueries = off
then "IndexReader" -> "index" "reader" -> "index" AND "reader"
Tom
docIDs. I assume these are Java ints but the
number depends on the number of hits. Is there a good way to estimate (or
measure:) the size of this in memory?
Tom Burton-West
optimum mergeFactor somewhere between 0 (noMerge
merge policy) and 1,000. (We are also planning to raise the ramBufferSizeMB
significantly).
What experience do others have using a large mergeFactor?
Tom
small compared to our huge OCR
field. Since we construct our Solr documents programmatically, I'm fairly
certain that they are always in the same order. I'll have to look at the code
when I get back to make sure.
We aren't using term vectors now, but we plan to add them as well as a number
of fields based on MARC (cataloging) metadata in the future.
Tom
query
Can Hoss or someone else point me to more detailed information on what might be
involved in the two ideas listed above?
Is somehow keeping an up-to-date map of unique Solr ids to internal Lucene ids
needed to implement this or is that a separate issue?
Tom Burton-West
http://www.hathitrust
e kind of in-memory
map after we optimize an index and before we mount it in production. In our
workflow, we update the index and optimize it before we release it and once it
is released to production there is no indexing/merging taking place on the
production index (so the internal Lucene i
Thanks Yonik,
Is this something you might have time to throw together, or an outline of what
needs to be thrown together?
Is this something that should be asked on the developer's list or discussed in
SOLR 1715 or does it make the most sense to keep the discussion in this thread?
writing the appropriate Solr filter factories? Are
there any tricky gotchas in writing such a filter?
If so, should I open a JIRA issue or two JIRA issues so the filter factories
can be contributed to the Solr code base?
Tom
es. (Unless someone beats
me to it :)
Tom
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Monday, November 01, 2010 12:49 PM
To: solr-user@lucene.apache.org
Subject: Re: Using ICUTokenizerFilter or StandardAnalyzer with UAX#29 support
from Solr
On Mon, Nov 1, 2010
on that some of them are marked as deleted.
numDocs is the actual number of undeleted documents.
If you run an optimize, the index will be rewritten, the index size will go down,
and numDocs will equal maxDocs.
Tom Burton-West
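To make the relationship concrete (the numbers below are invented), the count of deleted-but-not-yet-purged documents is simply the difference between the two statistics:

```shell
# Hypothetical values as read from a core's statistics page.
MAX_DOCS=1000000   # maxDocs: all docs, including deleted ones
NUM_DOCS=950000    # numDocs: undeleted docs only

DELETED=$((MAX_DOCS - NUM_DOCS))
echo "deleted (not yet purged): $DELETED"
```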
-Original Message-
From: Claudio Devecchi [mailto:cdevec...@g
An optimize takes lots of CPU and I/O since it has to rewrite your indexes, so
only do it when necessary.
You can just use curl to send an optimize message to Solr when you are ready.
See:
http://wiki.apache.org/solr/UpdateXmlMessages#Passing_commit_parameters_as_part_of_the_URL
Tom
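A sketch of that curl call (the host and core path are invented; maxSegments is optional and limits how many segments remain after the merge):

```shell
# Hypothetical core URL; adjust for your installation.
CORE_URL="http://localhost:8983/solr"
OPTIMIZE_URL="${CORE_URL}/update?optimize=true&maxSegments=1"

echo "$OPTIMIZE_URL"
# Issue it with: curl "$OPTIMIZE_URL"
```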
frequencies
2) Shows how to configure termvectors in Solr schema.xml to only store term
frequencies, and not positions and offsets?
Tom
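Point 2) above can be sketched as follows (field and type names are invented): term vectors with frequencies only are configured per field by enabling termVectors while leaving termPositions and termOffsets off:

```xml
<!-- schema.xml sketch; names are illustrative -->
<field name="text" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="false" termOffsets="false"/>
```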
ely.
I think I'm missing something here. Can someone point me to documentation
or examples?
Tom
Simplified schema.xml excerpt:
there a plan to implement coord and queryNorm?
Tom
On Mon, Dec 17, 2012 at 5:17 PM, Markus Jelsma
wrote:
> Hi Tom,
>
> The global similarity must be able to delegate similarity to your
> per-field setting. Solr has the SchemaSimilarityFactory that can do this.
> Please replace y
n and "年" as type:Single and
script: Common.
This doesn't seem right. Couldn't fit the whole analysis output on one
screen so there are two screenshots attached.
Any clues as to what is going on and whether it is a problem?
Tom
. i.e., ABC => searched as AB BC: only AB gets highlighted even if the
matching string is ABC. (Where ABC are Chinese characters such as 大亚湾 =>
searched as 大亚 亚湾,
but only 大亚 is highlighted rather than 大亚湾)
Is there some highlighting parameter that might fix this?
Tom Burton-West
Hello,
I'm trying to understand some Solr relevance issues using debugQuery=on,
but I don't see the coord factor listed anywhere in the explain output.
My understanding is that the coord factor is not included in either the
queryNorm or the fieldNorm.
What am I missing?
Tom
e queryNorm be applied to each
result (and show up in each explain from the debugQuery?)
This is Solr 3.6.
Tom
-
ocr:aardvark
0.4395488 = (MATCH) fieldWeight(ocr:aardvark in 504374), product of:
  7.5498343 = tf(termFreq(ocr:aardvark)=57)
Thanks Hoss,
Yes it is a distributed query.
Tom
On Fri, Jan 25, 2013 at 2:32 PM, Chris Hostetter
wrote:
>
> : I have a one term query: "ocr:aardvark" When I look at the explain
> : output, for some matches the queryNorm and fieldWeight are shown and for
> : some matche
, New York, NY, USA, 75-82.
DOI=10.1145/1571941.1571957 http://doi.acm.org/10.1145/1571941.1571957
Tom Burton-West
http://www.hathitrust.org/blogs/large-scale-search
Seems like a change in default behavior like this should be included in the
changes.txt for Solr 3.5.
Not sure how to do that.
Tom
-Original Message-
From: Naomi Dushay [mailto:ndus...@stanford.edu]
Sent: Thursday, February 23, 2012 1:57 PM
To: solr-user@lucene.apache.org
Subject
ink it
would help if the change was also noted in changes.txt.
Is it possible to revise the changes.txt for 3.5?
Do you by any chance know where the change in the default behavior was
discussed? I know it has been a contentious issue.
Tom
-Original Message-
From: Erik Hatcher [mailto:erik
ample
solrconfig was 2,147,483,647, we would never hit this limit, but I was wondering
why it is no longer in the example.
Tom