hi all,
I am using spellcheck in solr 1.4. I found that the spell checker is not
implemented the way SolrCore is. SolrCore uses reference counting to track
the current searcher: oldSearcher and newSearcher will both exist as long as
oldSearcher is still servicing some query. But in FileBasedSpellChecker
public void bu
for indexing, you can make good use of multiple cores easily by calling
IndexWriter.addDocument from multiple threads.
as far as I know, for searching, if there is only one request, you can't
make good use of the CPUs.
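A minimal sketch of what I mean, assuming a Lucene 3.x-era IndexWriter (the
index path, field name and thread count are just placeholders):

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class ParallelFeeder {
  public static void main(String[] args) throws Exception {
    final IndexWriter writer = new IndexWriter(
        FSDirectory.open(new File("/tmp/index")),
        new StandardAnalyzer(Version.LUCENE_30),
        IndexWriter.MaxFieldLength.UNLIMITED);
    ExecutorService pool = Executors.newFixedThreadPool(4); // roughly one thread per core
    for (int i = 0; i < 100000; i++) {
      final int id = i;
      pool.submit(new Runnable() {
        public void run() {
          try {
            Document doc = new Document();
            doc.add(new Field("id", String.valueOf(id),
                Field.Store.YES, Field.Index.NOT_ANALYZED));
            writer.addDocument(doc); // addDocument is thread-safe
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    writer.close(); // commit and close once all threads are done
  }
}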
On Sat, Oct 15, 2011 at 9:37 PM, Rob Brown wrote:
> Hi,
>
> I'm running Solr on a machine with
we have implemented one supporting "did you mean" and prefix suggestion
for Chinese. But we based our work on solr 1.4 and made many
modifications, so it will take time to integrate it into current solr/lucene.
Here is our solution. glad to hear any advice.
1. offline words and p
modify catalina.sh (or catalina.bat)
adding java startup params:
-Dsolr.solr.home=/your/path
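For catalina.sh on Linux, one way (a sketch; the path is a placeholder) is to add a line near the top such as:

JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/your/path"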
On Mon, Oct 31, 2011 at 8:30 PM, 刘浪 wrote:
> Hi,
> After I start tomcat and open http://localhost:8080/solr/admin, the page
> displays fine. But in the tomcat log, I find an exception like "Can't find
> resource 'solrconfig
set JAVA_OPTS=%JAVA_OPTS% -Dsolr.solr.home=c:\xxx
On Mon, Oct 31, 2011 at 9:14 PM, 刘浪 wrote:
> Hi Li Li,
>    I don't know where I should add it in catalina.bat. I know how to do it
> on Linux, but my OS is Windows.
>    Thank you very much.
>
> Sincerely,
> A
it says "Either filter or filterList may be set in the QueryCommand,
but not both." I am newbie of solr and have no idea of the exception.
What's wrong with it? thank you.
java.lang.IllegalArgumentException: Either filter or filterList may be
set in the QueryCommand, but not both.
at
org
I don't know, because it was patched by someone else and I can't get his
help. When will this component become a contrib? Using patches is so annoying.
2010/6/22 Martijn v Groningen :
> What version of Solr and which patch are you using?
>
> On 21 June 2010 11:46, Li Li wrote:
>&
I want to integrate a document's timestamp into the scoring of search. I
found an example in the book "Solr 1.4 Enterprise Search Server" about
function queries. I want to boost documents which are newer, so it may
be a function such as 1/(timestamp+1). But the function query is
added to the final resu
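For illustration, the kind of recency boost I have in mind (following the
FunctionQuery wiki, and assuming "timestamp" is a TrieDate field) would look
something like:

q=foo&defType=dismax&bf=recip(ms(NOW,timestamp),3.16e-11,1,1)

where 3.16e-11 roughly converts milliseconds to years, so the boost decays
over about a year.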
I want to delete the whole index and rebuild it frequently. I can't
delete the index files directly because I want to use replication.
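By "delete the whole index" I mean a full wipe through the update handler
rather than touching files, e.g. a delete-by-query followed by a commit
(host and port are placeholders):

curl http://localhost:8983/solr/update -H 'Content-type:text/xml' --data-binary '<delete><query>*:*</query></delete>'
curl http://localhost:8983/solr/update -H 'Content-type:text/xml' --data-binary '<commit/>'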
the index files are ill-formatted because the disk filled up while feeding. Can I
roll back to the last version? Is there any method to avoid unexpected
errors when indexing? attached are my segment_N files
I used to store the full text in the lucene index. But I found it's very
slow when merging indexes, because when merging 2 segments it copies the
fdt files into a new one. So I want to only index the full text. But when
searching I need the full text for applications such as highlighting and
viewing the full text. I can s
I used SegmentInfos to read the segment_N file and found the error is
that it tries to load deletedDocs but the .del file's size is 0 (because
of a disk error). So I used SegmentInfos to set delGen=-1 to ignore
deleted docs.
But I think there is some bug. The write logic may be -- it first
writes the
Are there any tools for "Distributed Indexing"? The wiki refers to
KattaIntegration and ZooKeeperIntegration in
http://wiki.apache.org/solr/DistributedSearch.
But it seems that they are concerned more with error handling and
replication. I need a dispatcher that dispatches different docs by
uniqueKey (suc
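Roughly the kind of dispatcher I have in mind -- just a sketch, with made-up
shard URLs and assuming "id" is the uniqueKey field:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class HashDispatcher {
  private final SolrServer[] shards;

  public HashDispatcher(String[] urls) throws Exception {
    shards = new SolrServer[urls.length];
    for (int i = 0; i < urls.length; i++) {
      shards[i] = new CommonsHttpSolrServer(urls[i]);
    }
  }

  public void add(SolrInputDocument doc) throws Exception {
    String key = (String) doc.getFieldValue("id"); // the uniqueKey field
    int shard = (key.hashCode() & Integer.MAX_VALUE) % shards.length;
    shards[shard].add(doc); // the same key always goes to the same shard
  }
}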
When I add some docs via
post.jar (org.apache.solr.util.SimplePostTool), it commits after all
docs are added. It will call IndexWriter.commit(). A new segment
will be added and sometimes it triggers segment merging. New index
files will be generated (.fnm, .tii, .tis, ...). Old segments will be
d
I want to cache the full text in memory to improve performance.
The full text is only used for highlighting in my application (but it's very
time consuming: my avg query time is about 250ms, and I guess it would cost
about 50ms if I just fetched the top 10 full texts. Things get worse when fetching
more full texts because i
estopensource.com
>
>
> On Wed, Jul 14, 2010 at 12:08 PM, Li Li wrote:
>
>> I want to cache full text into memory to improve performance.
>> Full text is only used to highlight in my application(But it's very
>> time consuming, My avg query time is about 250ms, I
provided you two options. Since you already store as part of the
> index, You could try external caching. Try using ehcache / Membase
> http://www.findbestopensource.com/tagged/distributed-caching . The caching
> system will do LRU and is much more efficient.
>
> On Wed, Jul 14, 2010 at
I want to load the full text into an external cache, so I added some code
in newSearcher, where I found the warm-up takes place. I add my code
before the solr warm-up which is configured in solrconfig.xml, like this:
...
public void newSearcher(SolrIndexSearcher newSearcher,
Sol
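Roughly, the listener looks like the sketch below (the external cache and the
"fulltext" field name are stand-ins; the hook is the Solr 1.4-era
SolrEventListener newSearcher callback):

import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrEventListener;
import org.apache.solr.search.SolrIndexSearcher;

public class FullTextCacheWarmer implements SolrEventListener {
  public void init(NamedList args) {}

  public void postCommit() {}

  public void newSearcher(SolrIndexSearcher newSearcher,
                          SolrIndexSearcher currentSearcher) {
    try {
      // walk the new index and push the stored "fulltext" field into an
      // external cache before the searcher is registered
      for (int docId = 0; docId < newSearcher.maxDoc(); docId++) {
        String text = newSearcher.doc(docId).get("fulltext");
        // ExternalCache is a stand-in for whatever cache is used:
        // ExternalCache.put(docId, text);
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}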
numDocs is the total number of indexed docs. Maybe your docs have duplicate
keys. When duplicated, the older one will be deleted. The uniqueKey is
defined in schema.xml.
2010/7/16 Karthik K :
> Hi,
> Is numDocs in solr statistics equal to the total number of documents that
> are searchable on solr? I find t
I have considered this problem and tried to solve it using 2 methods.
With these methods, we can also boost a doc by the relative positions of
the query terms.
1: add term position information when indexing
modify TermScorer.score:
public float score() {
  assert doc != -1;
  int f = freqs[pointer];
  floa
in QueryComponent.mergeIds. It will remove documents whose
uniqueKey duplicates that of others. In the current implementation, it uses
the first one encountered.
String prevShard = uniqueDoc.put(id, srsp.getShard());
if (prevShard != null) {
  // duplicate detected
But users will think there is something wrong with it when they run
the same query but get different results.
2010/7/21 MitchK :
>
> Li Li,
>
> this is the intended behaviour, not a bug.
> Otherwise you could get back the same record in a response for several
> times
yes. This will make users think our search engine has some bug.
from the comments in the code, there is more to do:
if (prevShard != null) {
  // For now, just always use the first encountered since we can't currently
  // remove the previous one added to the pri
I think what Siva means is that when there are docs with the same url,
keep the doc whose score is larger.
This is the right solution.
But it shows a problem of distributed search without a common idf: a doc
will get different scores in different shards.
2010/7/22 MitchK :
>
> It already was sorted by sco
it's not a topic related to solr. maybe you should read some papers
about wrapper generation or automatic web data extraction. If you
want to generate xpath, you could read Bing Liu's papers such
as "Structured Data Extraction from the Web based on Partial Tree
Alignment". Besides dom tre
where is the link of this patch?
2010/7/24 Yonik Seeley :
> On Fri, Jul 23, 2010 at 2:23 PM, MitchK wrote:
>> why do we do not send the output of TermsComponent of every node in the
>> cluster to a Hadoop instance?
>> Since TermsComponent does the map-part of the map-reduce concept, Hadoop
>> onl
the solr version I used is 1.4
2010/7/26 Li Li :
> where is the link of this patch?
>
> 2010/7/24 Yonik Seeley :
>> On Fri, Jul 23, 2010 at 2:23 PM, MitchK wrote:
>>> why do we do not send the output of TermsComponent of every node in the
>>> cluste
I use a format like yyyy-MM-ddThh:mm:ssZ. it works
2010/7/26 Rafal Bluszcz Zawadzki :
> Hi,
>
> I am using Data Import Handler from Solr 1.4.
>
> Parts of my data-config.xml are:
>
>
> processor="XPathEntityProcessor"
> stream="false"
> forEach="
I want a cache that caches the whole result of a query (all steps including
collapse, highlight and facet). I read
http://wiki.apache.org/solr/SolrCaching, but can't find such a global
cache. Maybe I can use an external cache to store key-value pairs. Is there
any such cache in solr?
I faced this problem but couldn't find any good solution. But if you have a
large stored field, such as the full text of a document, and you don't store
it in lucene, merging will be quicker, because merging 2 indexes forces
copying all fdt files into a new fdt. If you store it externally, the problem
you have to face is h
highlighting time is mainly spent on getting the field which you want
to highlight and tokenizing this field (if you don't store term vectors).
you can check what's wrong.
2010/7/30 Peter Spam :
> If I don't do highlighting, it's really fast. Optimize has no effect.
>
> -Peter
>
> On Jul 29, 2010, a
hi all
in lucene, can we store only the tf of a term's inverted list? in my
application, I only provide dismax queries with boolean queries and don't
support queries which need position info, such as phrase queries. So I
don't want to store position info in the prx file. How can I turn it off? And
if I turn off i
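For reference, the closest built-in switch I'm aware of in Lucene 2.9/3.x is
omitTermFreqAndPositions, though note it drops term frequencies as well as
positions; a sketch:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class OmitPositionsExample {
  public static Document build(String fullText) {
    Document doc = new Document();
    Field body = new Field("body", fullText, Field.Store.NO, Field.Index.ANALYZED);
    body.setOmitTermFreqAndPositions(true); // drops freq and positions; frq/prx get smaller
    doc.add(body);
    return doc;
  }
}

In Solr's schema.xml the equivalent appears to be omitTermFreqAndPositions="true"
on the field definition.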
the current implementation of distributed search uses the unique key in the
STAGE_EXECUTE_QUERY stage.
public int distributedProcess(ResponseBuilder rb) throws IOException {
  ...
  if (rb.stage == ResponseBuilder.STAGE_EXECUTE_QUERY) {
    createMainQuery(rb);
    return ResponseBuilder.STAGE_
do you mean content-based image retrieval or just searching images by tag?
if the former, you can try LIRE.
2010/9/15 Shawn Heisey :
> My index consists of metadata for a collection of 45 million objects, most
> of which are digital images. The executives have fallen in love with
> Google's color im
It seems there is a MoreLikeThis in lucene. I don't know whether there is a
counterpart in solr. It just uses the found document as a query to find
similar documents. Or you could just use a boolean OR query, and similar
questions will get a higher score. Of course, you can analyse the
question using some NLP t
hi all
I want to speed up search time for my application. In a query, the
time is largely spent reading posting lists (io with frq files) and
calculating scores and collecting results (cpu, with a priority queue). The IO is
hard to optimize, or is already partly optimized by nio. So I want to use
multiple threads to utili
yes, there is a MultiSearcher in lucene, but its idf across 2 indexes is
not global. maybe I can modify it and also the index, like:
term1 df=5 doc1 doc3 doc5
term1 df=5 doc2 doc4
2010/9/28 Li Li :
> hi all
> I want to speed up search time for my application. In a query, th
hi all,
I want to know the details of the IndexReader in SolrCore. I read a
little of SolrCore's code. Here is my understanding, is it correct?
Each SolrCore has many SolrIndexSearchers and keeps them in
_searchers, and _searcher keeps track of the latest version of the index.
Each SolrIndexSearcher
will one user search another user's index?
if not, you can use multiple cores.
2010/10/11 Tharindu Mathew :
> Hi everyone,
>
> I'm using solr to integrate search into my web app.
>
> I have a bunch of users who would have to be given their own individual
> indexes.
>
> I'm wondering whether I'd have to
is there anyone who could help me?
2010/10/11 Li Li :
> hi all,
> I want to know the detail of IndexReader in SolrCore. I read a
> little codes of SolrCore. Here is my understanding, are they correct?
> Each SolrCore has many SolrIndexSearcher and keeps them in
> _searchers. and
I don't think current lucene offers what you want now.
There are 2 main tasks in a search process.
One is "understanding" users' intentions. Because natural language
understanding is difficult, current Information Retrieval systems
"force" users to input some terms to express their needs.
hi all,
we have used solr to provide a search service in many products. I
found that for each product, we have to write some configuration and query
expressions.
our users are not used to this. they are familiar with sql and they may
describe it like this: I want a query that can search books whose
you can convert Chinese words to pinyin and use n-grams to search for
phonetically similar words.
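A rough sketch of what such a field type might look like in schema.xml -- note
the pinyin filter factory named here is a hypothetical/third-party component,
not something shipped with Solr; only the NGramFilterFactory is built in:

<fieldType name="text_pinyin_ngram" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- hypothetical third-party filter that converts Chinese characters to pinyin -->
    <filter class="com.example.PinyinFilterFactory"/>
    <!-- built-in n-gram filter for approximate matching -->
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="3"/>
  </analyzer>
</fieldType>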
On Wed, Feb 8, 2012 at 11:10 AM, Floyd Wu wrote:
> Hi there,
>
> Does anyone here ever implemented phonetic search especially with
> Chinese(traditional/simplified) using SOLR or Lucene?
>
> Please share some
Commit is called after adding each document.
you should add enough documents and then call a commit. commit is a
costly operation.
if you want to see the latest fed documents, you could use NRT.
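One way to batch commits instead (a sketch for solrconfig.xml; the thresholds
are just examples):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>   <!-- commit after this many added docs -->
    <maxTime>60000</maxTime>   <!-- or after this many milliseconds -->
  </autoCommit>
</updateHandler>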
On Tue, Feb 14, 2012 at 12:47 AM, Huy Le wrote:
> Hi,
>
> I am using solr 3.5. I seeing solr keep
ts be available after adding to the index.
>
> What I don't understand is why new segment files are created so often.
> Are the commit calls triggering new segment files being created? I don't
> see this behavior in another environment of the same version of solr.
>
>
method 1, dumping the data
for stored fields, you can traverse the whole index and save them
somewhere else.
for indexed but not stored fields, it may be more difficult.
if the indexed but not stored field is not analyzed (fields such as id),
it's easy to get from FieldCache.StringIndex.
But for
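A minimal sketch of that dump for a non-analyzed field, assuming the Lucene 3.x
FieldCache API and an "id" field (the index path is a placeholder):

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.store.FSDirectory;

public class DumpIds {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
    // StringIndex holds, per document, the ord of its term plus a lookup table
    FieldCache.StringIndex idx = FieldCache.DEFAULT.getStringIndex(reader, "id");
    for (int doc = 0; doc < reader.maxDoc(); doc++) {
      if (!reader.isDeleted(doc)) {
        System.out.println(doc + "\t" + idx.lookup[idx.order[doc]]);
      }
    }
    reader.close();
  }
}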
for method 2, deleting is wrong: we can't delete terms.
you would also have to hack the tii and tis files.
On Tue, Feb 14, 2012 at 2:46 PM, Li Li wrote:
> method1, dumping data
> for stored fields, you can traverse the whole index and save it to
> somewhere else.
> for index
nd Terms(...) it might work.
>
> Something like:
>
> HashSet ignoredTerms=...;
>
> FilteringIndexReader wrapper=new FilterIndexReader(reader);
>
> SegmentMerger merger=new SegmentMerger(writer);
>
> merger.add(wrapper);
>
> merger.Merge();
>
>
>
>
>
w have a shrunk index with specified terms removed.
>
> Implementation uses separate thread for each segment, so it re-writes
> them in parallel. Took about 15 minutes to do 770,000 doc index on my
> macbook.
>
>
> On Tue, Feb 14, 2012 at 10:12 PM, Li Li wrote:
> > I have rough
you can fool the lucene scoring function. override each method such as idf,
queryNorm and lengthNorm and let them simply return 1.0f.
I don't know whether lucene 4 will expose more details. but for 2.x/3.x, lucene can
only score by the vector space model and the formula can't be replaced by users.
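A minimal sketch of what I mean, assuming the Lucene 3.x Similarity API
(method signatures as in the 3.x DefaultSimilarity):

import org.apache.lucene.search.DefaultSimilarity;

public class FlatSimilarity extends DefaultSimilarity {
  @Override
  public float idf(int docFreq, int numDocs) {
    return 1.0f; // ignore how rare the term is
  }

  @Override
  public float queryNorm(float sumOfSquaredWeights) {
    return 1.0f; // no query normalization
  }

  @Override
  public float lengthNorm(String fieldName, int numTokens) {
    return 1.0f; // ignore field length
  }
}
// install it with IndexSearcher.setSimilarity(new FlatSimilarity())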
On Fri, Feb 17, 2012
lucene will never modify old segment files; it just flushes into a new
segment or merges old segments into a new one. after merging, old segments
will be deleted.
once a file (such as fdt and fdx) is generated, it will never be
re-generated. the only possibility is that in the generating stage, there is
what do you mean by "programmatically"? modify the code of solr? because solr is
not like lucene: it only provides http interfaces for its users, not a
java api.
if you want to modify solr, you can find this code in SolrCore:
private final LinkedList<RefCounted<SolrIndexSearcher>> _searchers =
    new LinkedList<RefCounted<SolrIndexSearcher>>();
and _searcher is current
optimize will generate new segments and delete old ones. if your master
also provides a searching service during indexing, the old files may still be
opened by an old SolrIndexSearcher; they will be deleted later. So when
indexing, the index size may double. But a moment later, the old index files
will be deleted.
it should be indexed but not analyzed. it doesn't need to be stored.
reading field values from stored fields is extremely slow,
so lucene uses a StringIndex to read fields for sorting. so if you want to
sort by some field, you should index this field and not analyze it.
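For example, in schema.xml the sort field could be declared roughly like this
(the field name is a placeholder; the built-in string type is not analyzed):

<field name="title_sort" type="string" indexed="true" stored="false"/>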
On Wed, Mar 14, 2012 at 6:43 PM, Fi
There is a class org.apache.solr.common.util.XML in solr.
you can use this wrapper:
public static String escapeXml(String s) throws IOException {
    StringWriter sw = new StringWriter();
    XML.escapeCharData(s, sw);
    return sw.getBuffer().toString();
}
On Wed, Mar 14, 2012 at
no, it has nothing to do with schema.xml.
post.jar just posts a file; it doesn't parse the file.
solr will use an xml parser to parse this file. if you don't escape special
characters, it's not a valid xml file and solr will throw exceptions.
On Thu, Mar 15, 2012 at 12:33 AM, neosky wrote:
> Thanks!
> D
how much memory is allocated to the JVM?
On Thu, Mar 15, 2012 at 1:27 PM, Husain, Yavar wrote:
> Solr is giving out of memory exception. Full Indexing was completed fine.
> Later while searching maybe when it tries to load the results in memory it
> starts giving this exception. Though with the sam
ver with exactly same system and solr configuration &
> memory it is working fine?
>
>
> -Original Message-
> From: Li Li [mailto:fancye...@gmail.com]
> Sent: Thursday, March 15, 2012 11:11 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr out of memory excep
ag solved a real problem we were
having. Whoever wrote the JRocket book you refer to no doubt had other
scenarios in mind...
On Thu, Mar 15, 2012 at 3:02 PM, C.Yunqin <345804...@qq.com> wrote:
> why should enable pointer compression?
>
>
>
>
> -- Original -
it's not the right place.
when you use java -Durl=http://... -jar post.jar data.xml,
the data.xml file must be a valid xml file. you should escape special chars
in this file.
I don't know how you generate this file.
if you use a java program (or other scripts) to generate this file, you should
use xml t
here is my method.
1. check out the latest source code from trunk or download the tarball
svn checkout http://svn.apache.org/repos/asf/lucene/dev/trunk lucene_trunk
2. create a dynamic web project in eclipse and close it.
for example, I create a project named lucene-solr-trunk in my
workspace.
gt;> Classpath entry /solr3_5/ssrc/solr/lib/easymock-2.2.jar will not be
>> exported or published. Runtime ClassNotFoundExceptions may result.
>> solr3_5P/solr3_5Classpath Dependency Validator Message
>> Classpath entry
>> /solr3_5/ssrc/solr/lib/geronimo-stax
it's not possible now because lucene doesn't support this.
when doing a disjunction query, it only records how many terms match this
document.
I think this is a common requirement for many users.
I suggest lucene should split the scorer into a matcher and a scorer.
the matcher just returns which doc is matche
houldMatch parameter'. Also
> norms can be used as a source for dynamics mm values.
>
> Wdyt?
>
> On Wed, Apr 11, 2012 at 10:08 AM, Li Li wrote:
>
> > it's not possible now because lucene don't support this.
> > when doing disjunction query, it onl
another way is to use payloads: http://wiki.apache.org/solr/Payloads
the advantage of payloads is that you only need one field and the frq
file is smaller than with two fields. but the disadvantage is that payloads are
stored in the prx file, so I am not sure which one is faster. maybe you can try
them both.
On
http://wiki.apache.org/solr/SolrCaching
On Fri, Apr 13, 2012 at 2:30 PM, Kashif Khan wrote:
> Does anyone explain what does the following parameters mean in SOLR cache
> statistics?
>
> *name*: queryResultCache
> *class*: org.apache.solr.search.LRUCache
> *version*: 1.0
> *description*: LRU
hi
I checked out the trunk and played with its new soft commit
feature. it's cool. But I've got a few questions about it.
From reading some introductory articles and the wiki, and some hasty code
reading, my understanding of its implementation is:
For a normal commit (hard commit), we should flush all in
you should reverse your sort algorithm. maybe you can override the tf
method of Similarity and return -1.0f * tf(). (I don't know whether the
default collector allows scores smaller than zero.)
Or you can hack this by adding a large number, or write your own
collector; in its collect(int doc) method, you can
as for versions below 4.0, it's not possible because of lucene's scoring
model. position information is stored, but only used to support phrase
queries. it just tells us whether a document is matched, but we can't boost
a document with it. A similar problem is: how to implement proximity boost.
for 2 search terms,
for this version, you may consider using payloads for position boosting.
you can save boost values in the payload.
I have used it with the lucene api where anchor text should weigh more than
normal text. but I haven't used it in solr.
some urls I found:
http://wiki.apache.org/solr/Payloads
http://digitalpebble.
not scoring by relevance and sorting by document id may speed it up a little?
I haven't done any tests of this. maybe you can give it a try, because
scoring will consume some cpu time. you just want to match and get the total
count.
On Wed, May 2, 2012 at 11:58 PM, vybe3142 wrote:
> I can achieve this by
+ before a term is correct. in lucene a term includes a field and a value.
Query  ::= ( Clause )*
Clause ::= ["+", "-"] [<TERM> ":"] ( <TERM> | "(" Query ")" )
<#_TERM_CHAR: ( <_TERM_START_CHAR> | <_ESCAPED_CHAR> | "-" | "+" ) >
<#_ESCAPED_CHAR: "\\" ~[] >
in lucene query syntax, you can't express a term value i
query=parser.parse(q);
System.out.println(query);
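A fuller sketch of that check, assuming the Lucene 3.x QueryParser (the field
name and analyzer are arbitrary):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class ParseDemo {
  public static void main(String[] args) throws Exception {
    QueryParser parser = new QueryParser(Version.LUCENE_35, "content",
        new StandardAnalyzer(Version.LUCENE_35));
    Query query = parser.parse("+title:solr +lucene");
    System.out.println(query); // prints the parsed query structure
  }
}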
On Thu, May 10, 2012 at 8:20 AM, Li Li wrote:
> + before term is correct. in lucene term includes field and value.
>
> Query ::= ( Clause )*
>
> Clause ::= ["+", "-"] [ ":"] ( | "
you should define your search first.
if the site is www.google.com, how do you match it: full string
matching or partial matching? e.g. should "google" match? if it
should, you need to write your own analyzer for this field.
On Tue, May 22, 2012 at 2:03 PM, Shameema Umer wrote:
> Sorry,
> Please
you should find some clues in the tomcat log.
On 2012-5-22 at 7:49 PM, "Spadez" wrote:
> Hi,
>
> This is the install process I used in my shell script to try and get Tomcat
> running with Solr (debian server):
>
>
>
> I swear this used to work, but currently only Tomcat works. The Solr page
> just comes up wi
yes, I am also interested in good performance with 2 billion docs. how
many search nodes do you use? what's the average response time and qps?
another question: where can I find related papers or resources about your
algorithm which explain it in detail? why is it better than
google site(b
This sounds wrong, but it is true. With
> RAMDirectory, Java has to work harder doing garbage collection.
>
> On Fri, Jun 8, 2012 at 1:30 AM, Li Li wrote:
>> hi all
>> I want to use lucene 3.6 providing searching service. my data is
>> not very large, raw data is le
at 4:45 PM, Michael Kuhlmann wrote:
> Set the swapiness to 0 to avoid memory pages being swapped to disk too
> early.
>
> http://en.wikipedia.org/wiki/Swappiness
>
> -Kuli
>
> Am 11.06.2012 10:38, schrieb Li Li:
>
>> I have roughly read the codes of RAMDirectory. it
d a "small" segment. Every night I will
merge them. new added documents will flush into a new segment and I
will merge the new generated segment and the small one.
Our update operations are not very frequent.
On Mon, Jun 11, 2012 at 4:59 PM, Paul Libbrecht wrote:
> Li Li,
>
> have yo
ss
>
> -Kuli
>
> Am 11.06.2012 10:38, schrieb Li Li:
>
>> I have roughly read the codes of RAMDirectory. it use a list of 1024
>> byte arrays and many overheads.
>> But as far as I know, using MMapDirectory, I can't prevent the page
>> faults. OS will swap less
persist your index,
> you'll need to live with disk IO anyway.
>
> Greetings,
> Kuli
>
> Am 11.06.2012 11:20, schrieb Li Li:
>
>> I am sorry. I make a mistake. even use RAMDirectory, I can not
>> guarantee they are not swapped out.
>>
>> On Mon,
t;
> http://en.wikipedia.org/wiki/Swappiness
>
> -Kuli
>
> Am 11.06.2012 10:38, schrieb Li Li:
>
>> I have roughly read the codes of RAMDirectory. it use a list of 1024
>> byte arrays and many overheads.
>> But as far as I know, using MMapDirectory, I can't prev
ul approach
> http://lucene.472066.n3.nabble.com/High-response-time-after-being-idle-tp3616599p3617604.html.
>
> On Mon, Jun 11, 2012 at 3:02 PM, Toke Eskildsen
> wrote:
>
>> On Mon, 2012-06-11 at 11:38 +0200, Li Li wrote:
>> > yes, I need average query time less than
hi all
I ran into a strange problem when feeding data to solr. I started
feeding and then hit Ctrl+C to kill the feed program (post.jar). Because the
XML stream was terminated abnormally, DirectUpdateHandler2 threw
an exception. I went to the index directory and sorted it by date; the
newest files are f
1. make sure the port is not in use.
2. ./bin/shutdown.sh && tail -f logs/xxx to see what the server is doing.
if you just fed data or modified the index and didn't flush/commit,
it will do some work when shutting down.
2010/12/1 Robert Petersen :
> Greetings, we're wondering why we can issue th
you may implement your own MergePolicy to keep one large index and
merge all the other small ones,
or simply set the merge factor to 2 and keep the largest index from being merged
by setting maxMergeDocs to less than the number of docs in the largest one.
So there is one large index and a small one. when adding a few
docs, they wi
I think it will not, because the default configuration can only have 2
newSearcher threads, but the delay will get longer and longer. The
newer newSearcher will wait for the 2 earlier ones to finish.
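That limit comes from solrconfig.xml; in the stock example config it is set
roughly like this:

<maxWarmingSearchers>2</maxWarmingSearchers>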
2010/12/1 Jonathan Rochkind :
> If your index warmings take longer than two minutes, but you're doing a
> co
write the document
into a log file,
and after flushing, delete the corresponding lines in the log file.
if the program crashes, we will replay the log and add the documents back into
the RAMDirectory.
Has anyone done similar work?
2010/12/1 Li Li :
> you may implement your own MergePolicy to keep on large index
see maxMergeDocs (maxMergeSize) in solrconfig.xml. if a segment's
document count is larger than this value, it will not be merged.
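For example, in the <mainIndex> section of solrconfig.xml (the values are only
an illustration):

<mainIndex>
  <mergeFactor>10</mergeFactor>
  <maxMergeDocs>1000000</maxMergeDocs>
</mainIndex>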
2010/12/27 Rok Rejc :
> Hi all,
>
> I have created an index, commited the data and after that I had run the
> optimize with default parameters:
>
> http://localhost:8
maybe you can consult the log files; they may show you something.
btw, how do you post your command?
do you use curl 'http://localhost:8983/solr/update?optimize=true' ?
or do you post an xml file?
2010/12/27 Rok Rejc :
> On Mon, Dec 27, 2010 at 3:26 AM, Li Li wrote:
>
>> see maxMerg
ser by entering url
>>
>> http://localhost:8080/myindex/update?optimize=true
>> or
>> http://localhost:8080/myindex/update?stream.body=
>>
>> Thanks.
>>
>>
>> On Mon, Dec 27, 2010 at 7:12 AM, Li Li wrote:
>>
>>> maybe you can consul
do you mean the queryResultCache? you can comment out the related paragraph in
solrconfig.xml.
see http://wiki.apache.org/solr/SolrCaching
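The paragraph I mean looks roughly like this in the example solrconfig.xml:

<queryResultCache
    class="solr.LRUCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>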
2011/2/8 Isan Fulia :
> Hi,
> My solrConfig file looks like
>
>
>
>
>
> multipartUploadLimitInKB="2048" />
>
>
> default="true" />
>
> class="org.apache.so
fieldcache - that can't be
> commented out. This cache will always jump into the picture
>
> If I need to do such things, I restart the whole tomcat6 server to flush ALL
> caches.
>
> 2011/2/11 Li Li
>
>> do you mean queryResultCache? you can comment r
hi
it seems my mail was judged as spam.
Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the recipient
domain. We recommend contacting the other email provider for further
information about the cause of this error. The error that the other
bility of master. we want to use
some synchronization mechanism so that only 1 or 2 ReplicationHandler
threads are executing the CMD_GET_FILE command at a time.
Is that solution feasible?
2011/3/11 Li Li
> hi
> it seems my mail is judged as spam.
> Technical details of permanent failure:
>
That's the job your analyzer should take care of.
2011/3/17 Andy :
> Hi,
>
> For my Solr server, some of the query strings will be in Asian languages such
> as Chinese or Japanese.
>
> For such query strings, would the Standard or Dismax request handler work? My
> understanding is that both the Stand
will UseCompressedOops be useful? for an application using less than 4GB of
memory, it will be better than 64-bit references. But for an application using
more memory, it will not be cache friendly.
"JRockit: The Definitive Guide" says: "Naturally, 64 GB isn't a
theoretical limit but just an example. It was me
has the master updated the index during replication?
this could occur when it failed to download a file because of a network problem.
209715200!=583644834 means the size of the file the slave should fetch is 583644834
but it only downloaded 209715200 bytes. maybe the connection timed out.
2011/2/16 Markus Jelsma :
there are 3 conditions that will trigger an auto flush in lucene:
1. the size of the index in ram is larger than the ram buffer size
2. the number of documents in memory is larger than the number set by setMaxBufferedDocs.
3. the number of deleted terms is larger than the value set by
setMaxBufferedDeleteTerms.
auto flushing by
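A minimal sketch of tuning those three thresholds on a Lucene 3.0-style
IndexWriter (the index path and numbers are arbitrary):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class FlushSettings {
  public static IndexWriter open() throws Exception {
    IndexWriter writer = new IndexWriter(
        FSDirectory.open(new File("/tmp/index")),
        new StandardAnalyzer(Version.LUCENE_30),
        IndexWriter.MaxFieldLength.UNLIMITED);
    writer.setRAMBufferSizeMB(64.0);        // condition 1: flush when RAM usage exceeds 64 MB
    writer.setMaxBufferedDocs(10000);       // condition 2: flush after 10000 buffered docs
    writer.setMaxBufferedDeleteTerms(1000); // condition 3: flush after 1000 buffered delete terms
    return writer;
  }
}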