post.jar only supports UTF-8; you must convert the file's encoding yourself.
2011/4/1 Jan Høydahl :
> Hi,
>
> Testing the new Solr 3.1 release under Windows XP and Java 1.6.0_23
>
> When trying to post example\exampledocs\gb18030-example.xml using post.jar I
> get this error:
> % java -jar post.jar gb18030-exampl
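Li Li's point is that post.jar always sends the file body as UTF-8, so a GB18030 file has to be converted first. A minimal conversion sketch in Java (file names are placeholders; iconv or any editor that can re-save the file as UTF-8 works just as well). If the XML declaration inside the file names GB18030 as its encoding, change that to UTF-8 too.

import java.io.*;

public class Gb18030ToUtf8 {
    public static void main(String[] args) throws IOException {
        // Read the file as GB18030 and write it back out as UTF-8.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("gb18030-example.xml"), "GB18030"));
        Writer out = new OutputStreamWriter(new FileOutputStream("utf8-example.xml"), "UTF-8");
        char[] buf = new char[4096];
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
        }
        in.close();
        out.close();
    }
}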
hi all,
I want to provide full-text search for some "small" websites.
It seems cloud computing is popular now, and it would save costs
because it doesn't require hiring an engineer to maintain
the machines.
For now, there are many services such as Amazon S3, Google App
Engine, MS Azure, etc. I am
hi all,
I read the Apache Solr 3.1 release notes today and found that
MMapDirectory is now the default implementation on 64-bit systems.
I am currently using Solr 1.4 with a 64-bit JVM on Linux. How can I use
MMapDirectory? Will it improve performance?
if MMapDirectory will perform better for you with Linux over
>> NIOFSDir. I'm pretty sure in Trunk/4.0 it's the default for Windows and
>> maybe Solaris. In Windows, there is a definite advantage for using
>> MMapDirectory on a 64-bit system.
>>
>> James Dye
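For reference, a Lucene-level sketch of forcing MMapDirectory explicitly (Lucene 3.x API; the index path is a placeholder). In Solr 3.x and later the directory implementation can, as far as I know, also be chosen in solrconfig.xml via the directoryFactory element (e.g. solr.MMapDirectoryFactory); I am not sure Solr 1.4 exposes that switch, so treat that as an assumption.

import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.MMapDirectory;

public class OpenWithMmap {
    public static void main(String[] args) throws Exception {
        // FSDirectory.open() picks an implementation for the platform;
        // constructing MMapDirectory directly forces memory-mapped I/O.
        Directory auto = FSDirectory.open(new File("/path/to/index"));
        Directory mmap = new MMapDirectory(new File("/path/to/index"));
        System.out.println(auto.getClass().getSimpleName() + " vs " + mmap.getClass().getSimpleName());
    }
}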
hi all,
I tested it following the instructions in
http://wiki.apache.org/solr/SpellCheckComponent, but something seems
wrong.
The sample URL in the wiki is
http://solr:8983/solr/select?q=*:*&spellcheck=true&spellcheck.build=true&spellcheck.q=toyata&qt=spell&shards.qt=spell&shards=solr-
hi all,
I followed the wiki http://wiki.apache.org/solr/SpellCheckComponent
but there is something wrong.
The URL given by the wiki is
http://solr:8983/solr/select?q=*:*&spellcheck=true&spellcheck.build=true&spellcheck.q=toyata&qt=spell&shards.qt=spell&shards=solr-shard1:8983/solr,solr-shar
This may need something like a language model to make suggestions.
I found an issue: https://issues.apache.org/jira/browse/SOLR-2585
What's its status?
On Thu, Aug 18, 2011 at 11:31 PM, Valentin wrote:
> I'm trying to configure a spellchecker to autocomplete full sentences from my
> query.
>
> I've
g directly, not in url, but should
> work the same.
> Maybe an issue in your spell request handler.
>
>
> 2011/8/19 Li Li
>
>> hi all,
>> I follow the wiki http://wiki.apache.org/solr/SpellCheckComponent
>> but there is something wrong.
>> t
I haven't used the suggester yet. But for spellcheck: if you don't
provide spellcheck.q, the q parameter is analyzed by a query converter
which "tokenizes" your query;
otherwise the analyzer of the field is used to process the spellcheck query.
If you don't want your query tokenized, you should pass spellcheck.q
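A small SolrJ sketch of setting the parameter (hedged: "spell" is the handler name from the wiki URLs above, and the field/analyzer setup is assumed):

import org.apache.solr.client.solrj.SolrQuery;

public class SpellcheckParams {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("iphone 4s blakc");
        q.setQueryType("spell");                  // qt=spell
        q.set("spellcheck", true);
        // Without spellcheck.q, the q parameter goes through the query converter
        // (tokenized); with spellcheck.q, the field's own analyzer is used instead.
        q.set("spellcheck.q", "iphone 4s blakc");
        System.out.println(q);
    }
}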
NullPointerException? Do you have the full exception stack trace?
On Fri, Aug 19, 2011 at 6:49 PM, Valentin wrote:
>
> Li Li wrote:
>> If you don't want to tokenize query, you should pass spellcheck.q
>> and provide your own analyzer such as keyword analyzer.
>
>
Line 476 of SpellCheckComponent.getTokens in my copy is assert analyzer != null;
it seems our code versions don't match. Could you decompile your
SpellCheckComponent.class?
On Fri, Aug 19, 2011 at 7:23 PM, Valentin wrote:
> My beautiful NullPointer Exception :
>
>
> SEVERE: java.lang.NullPoin
Or is your analyzer null? Are there any other exceptions or warnings in your log file?
On Fri, Aug 19, 2011 at 7:37 PM, Li Li wrote:
> Line 476 of SpellCheckComponent.getTokens of mine is assert analyzer !=
> null;
> it seems our codes' versions don't match. could
hi all
I am interested in a vertical crawler, but it seems this project is not
very active; its last update was 11/16/2009.
It seems that the IndexWriter wants to flush but needs to wait for other threads to
become idle. I also see that the n-gram filter is working. Is your field's value too
long? You should also tell us the average load of the system, the free memory, and
the memory used by the JVM.
On 2012-06-27 at 7:51 PM, "Arkadi Colson" wrote:
> Anybody an idea
1. precisionStep is used for range queries on numeric fields; see
http://lucene.apache.org/core/old_versioned_docs/versions/3_5_0/api/all/org/apache/lucene/search/NumericRangeQuery.html
2. positionIncrementGap is used for phrase queries on multi-valued fields,
e.g. doc1 has two titles:
title1: ab cd
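A Lucene-level sketch of where precisionStep shows up (Lucene 3.x API; the field name and step value are just examples). A smaller step indexes more trie terms per value but makes range queries cheaper; positionIncrementGap, by contrast, is a schema attribute that puts a position gap between successive values of a multi-valued field so a phrase query cannot match across two values.

import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;

public class PrecisionStepExample {
    public static void main(String[] args) {
        // The same precisionStep (here 4) must be used at index time (NumericField)
        // and at query time so the trie-encoded terms line up.
        Query priceRange = NumericRangeQuery.newIntRange("price", 4, 10, 100, true, true);
        System.out.println(priceRange);
    }
}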
I think they are logically the same, but 1 may be a little faster than 2.
On Thu, Jun 28, 2012 at 5:59 AM, Rublex wrote:
> Hi,
>
> Can someone explain to me please why these two queries return different
> results:
>
> 1. -PaymentType:Finance AND -PaymentType:Lease AND -PaymentType:Cash *(700
>
Could you please use jstack to dump the thread call stacks?
On Thu, Jun 28, 2012 at 2:53 PM, Arkadi Colson wrote:
> It now hanging for 15 hour and nothing changes in the index directory.
>
> Tips for further debugging?
>
>
> On 06/27/2012 03:50 PM, Arkadi Colson wrote:
>>
>> I'm sending files to solr wi
On Thu, Jun 28, 2012 at 3:51 PM, ZHANG Liang F
wrote:
> Thanks a lot, but the precisionStep is still very vague to me! Could you give
> me a example?
>
> -Original Message-
> From: Li Li [mailto:fancye...@gmail.com]
> Sent: June 28, 2012 11:25
> To: solr-user@lucene.ap
Create a field for exact match and add it as an optional boolean clause.
On 2012-08-11 at 1:42 PM, "abhayd" wrote:
> hi
>
> I have documents like
> iphone 4 - white
> iphone 4s - black
> ipone4 - black
>
> when user searches for iphone 4 i would like to show iphone 4 docs first
> and
> iphone 4s after that.
> Simil
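A hedged sketch of the suggestion: copy the title into an untokenized field (string type or keyword-analyzed), here hypothetically called name_exact via a copyField in schema.xml, and add it as an optional, boosted clause so exact matches rank first.

import org.apache.solr.client.solrj.SolrQuery;

public class ExactMatchBoost {
    public static void main(String[] args) {
        // "name" and "name_exact" are assumed field names.
        SolrQuery q = new SolrQuery("name:(iphone 4) OR name_exact:\"iphone 4\"^10");
        System.out.println(q);
    }
}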
On 2012-07-02 at 6:37 PM, "Nicholas Ball" wrote:
>
>
> That could work, but then how do you ensure commit is called on the two
> cores at the exact same time?
That may need something like two-phase commit in a relational DB. Lucene
has prepareCommit, but to implement 2PC, many things would need to be done.
> Also, any w
Do you really need this?
Distributed transactions are a hard problem. In 2PC, every node can
fail, including the coordinator; something like leader election is needed to make
sure it works. You could try ZooKeeper.
But if the transaction is not critically important, like transferring money in a
bank, you can
http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit
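For what it's worth, here is a sketch of the Lucene prepareCommit flow mentioned above, applied to two writers (Lucene 3.x API; paths are assumptions). It is only the happy path, not real 2PC: a crash between the two commit() calls still leaves the indexes out of sync, which is exactly the coordination problem the ZooKeeper recipe addresses.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class TwoCoreCommit {
    public static void main(String[] args) throws Exception {
        IndexWriter a = open("/data/core1/index");
        IndexWriter b = open("/data/core2/index");
        try {
            // Phase 1: both writers flush and fsync, but nothing is visible yet.
            a.prepareCommit();
            b.prepareCommit();
            // Phase 2: make both visible. A crash between these two lines still
            // breaks atomicity, which is why real 2PC needs a coordinator.
            a.commit();
            b.commit();
            a.close();
            b.close();
        } catch (Exception e) {
            a.rollback();
            b.rollback();
            throw e;
        }
    }

    private static IndexWriter open(String path) throws Exception {
        return new IndexWriter(FSDirectory.open(new File(path)),
                new IndexWriterConfig(Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36)));
    }
}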
On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
wrote:
>
> Haven't managed to find a good way to do this yet. Does anyone have any
> ideas on how I could implement this feature?
> Really need to move docs across from on
Are there any built-in tools for performance testing? Thanks.
I want to use the fast highlighter in Solr 1.4 and found an issue:
https://issues.apache.org/jira/browse/SOLR-1268
The attachment listed there: SOLR-1268.patch, attached 2010-02-05 10:32 PM by Koji Sekiguc
Apr 12, 2011, at 11:47 AM, Erick Erickson wrote:
> Yes. You need to put, say, a load balancer on front of your slaves
> and distribute the requests to the slave.
>
> Best
> Erick
>
> On Tue, Apr 12, 2011 at 2:20 PM, Li Tan wrote:
>
>> I have 1 master, and 2 sla
You should just ask me.
Sent from my iPhone
On Apr 13, 2011, at 11:27 AM, soumya rao wrote:
> Thanks for the reply Josh.
>
> And where should I make changes in ruby to add filters?
>
> Soumya
>
> On Wed, Apr 13, 2011 at 11:20 AM, Joshua Bouchair <
> joshuabouch...@wasserstrom.com> wrote:
>
Hey guys, how do you curl-update all the XML files inside a folder, from A to D?
Example: curl http://localhost:8080/solr update
Sent from my iPhone
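No curl answer shows up in this digest, so purely as a sketch, the same job done with SolrJ instead of curl (SolrJ 1.4/3.x API; the URL, the folder, and the "first letter A-D" rule are assumptions). With curl itself, a shell loop over [A-D]*.xml posting each file to /update with Content-Type text/xml would do the same thing.

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.DirectXmlRequest;

public class PostXmlFolder {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");
        for (File f : new File("exampledocs").listFiles()) {
            char first = Character.toUpperCase(f.getName().charAt(0));
            if (f.getName().endsWith(".xml") && first >= 'A' && first <= 'D') {
                String xml = FileUtils.readFileToString(f, "UTF-8");
                server.request(new DirectXmlRequest("/update", xml)); // raw <add> XML
            }
        }
        server.commit();
    }
}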
Looks like a dependency problem. Did you or he include the dependencies in the
solrconfig?
Sent from my iPhone
On Apr 19, 2011, at 8:35 AM, Oleg Tikhonov wrote:
>> Hello everybody,
>>
>> Recently, I got a message from a guy who was asking about
>> TikaEntityProcessor.
>> He uses Solr 1.4 and Tika 0
Can you post the data-config.xml? Perhaps you didn't set a batch size.
Sent from my iPhone
On Apr 21, 2011, at 5:09 PM, Scott Bigelow wrote:
> Thanks for the e-mail. I probably should have provided more details,
> but I was more interested in making sure I was approaching the problem
> correctly (
on its own?
Thanks,
Li
But I don't think it will affect the core status.
Do you guys have any idea why this particular core is not published
as active? From the log, most steps are done except the very last one,
publishing its state to ZK.
Thanks,
Li
On Thu, Apr 21, 2016 at 7:08 AM, Rajesh Hazari
wrote
r calls succeed and the next ZK ping should
bring the core back to normal, right? We have an active monitor running at
the same time, querying every core in distrib=false mode, and every query
succeeds.
Thanks,
Li
On Tue, Apr 26, 2016 at 6:20 PM, Erick Erickson
wrote:
> One of the reasons this
Hi,
I am starting to use the join parser with our Solr. We have some default fields,
defined in solrconfig.xml:
defType: edismax
echoParams: explicit
rows: 10
df: all_text number party name all_code ent_name
qf: all_text number^3 name^5 party^3 all_code^2 ent_name^7
fl: id descripti
ng else that would help. You might review:
>
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> Erick
>
> On Fri, Apr 3, 2015 at 10:58 AM, Frank li wrote:
> > Hi,
> >
> > I am starting using join parser with our solr. We have some default
>
The error message was from the query with "debug=query".
On Mon, Apr 6, 2015 at 11:49 AM, Frank li wrote:
> Hi Erick,
>
>
> Thanks for your response.
>
> Here is the query I am sending:
>
> http://dev-solr:8080/solr/collection1/select?q={!join+from=litigation_
r, you need to use
> edismax or explicitly create the multiple clauses.
>
> I'm not quite sure what the join parser is doing with the df
> parameter. So my first question is "what happens if you just use a
> single field for df?".
>
> Best,
> Erick
>
> On Mon,
We did two Solr queries and they were supposed to return the same results but
did not:
Query 1: all_text:(US 4,568,649 A)
"parsedquery": "(+((all_text:us ((all_text:4 all_text:568 all_text:649
all_text:4568649)~4))~2))/no_coord",
Result: "numFound": 0,
Query 2: all_text:(US 4568649)
"parsedquery": "(
Hi Yonik,
I am reading your blog; it is helpful. One question for you about the following
example:
curl http://localhost:8983/solr/query -d 'q=*:*&rows=0&
json.facet={
categories:{
type : terms,
field : cat,
sort : { x : desc},
facet:{
x : "avg(price)",
y : "sum(p
}
<http://localhost:8983/solr/demo/query?q=apple&json.facet=%7Bx:%27avg%28price%29%27%7D>
I really appreciate your help.
Frank
On Thu, May 7, 2015 at 2:24 PM, Yonik Seeley wrote:
> On
Is there any book I can read so I won't ask such dumb questions? Thanks.
On Thu, May 7, 2015 at 2:32 PM, Frank li wrote:
> This one does not have problem, but how do I include "sort" in this facet
> query. Basically, I want to write a solr query which can sort the fa
Hi Yonik,
Any update for the question?
Thanks in advance,
Frank
On Thu, May 7, 2015 at 2:49 PM, Frank li wrote:
> Is there any book to read so I won't ask such dummy questions? Thanks.
>
> On Thu, May 7, 2015 at 2:32 PM, Frank li wrote:
>
>> This one does not hav
ely easier to use "-d" with curl...
>
> curl "http://localhost:8983/solr/techproducts/query"; -d
> 'q=*:*&json.facet={cats:{terms:{field:cat,sort:"count asc"}}}'
>
> That also allows you to format it nicer for reading as well:
>
> c
Here is our SOLR query:
http://qa-solr:8080/solr/select?q=type:PortalCase&json.facet={categories:{terms:{field:campaign_id_ls,sort:%27count+asc%27}}}&rows=0
I replaced "cats" with "categories". It is still not working.
On Sun, May 10, 2015 at 12:10 AM, Frank li
I figured it out now. It works. "cats" is just a name, right? It does not
matter what name is used.
I really appreciate your help. This is going to be really useful. I meant
"json.facet".
On Sun, May 10, 2015 at 12:13 AM, Frank li wrote:
> Here is our SOLR query:
>
>
> h
Thanks for your help. I figured it out, just as you said. Appreciate your
help. Somehow I forgot to reply to your post.
On Wed, Apr 29, 2015 at 9:24 AM, Chris Hostetter
wrote:
>
> : We did two SOLR qeries and they supposed to return the same results but
> : did not:
>
> the short answer is: if you wa
, our
solr restart will be more robust.
Any suggestions will be appreciated.
Thanks,
Li
heckLive=true&core=test_collection_112_shard1_replica1
&wt=javabin&onlyIfLeader=true&version=2} status=0 QTime=4001
Is there any known bug? all collections are empty.
Thanks,
Li
On Mon, May 16, 2016 at 12:50 PM, Anshum Gupta
wrote:
> I think you are approaching the problem al
This happened the second time I performed a restart. But after that,
this collection gets stuck here every time. If I restart the leader node
as well, the core can get out of the recovering state.
On Mon, May 16, 2016 at 5:00 PM, Li Ding wrote:
> Hi Anshum,
>
> This is fo
Hi Jack,
Do you have a date for the new version of your book:
solr_4x_deep_dive_early_access?
Thanks,
Fudong
On Mon, Oct 21, 2013 at 10:39 AM, Jack Krupansky wrote:
> Take a look at the unit tests for various "value sources", and find a Jira
> that added some value source and look at the patc
We got different results for these two queries. The first one returned 115
records and the second returned 179 records.
Thanks,
Fudong
I have a Solr server that indexes 2500 documents (up to 50MB each, average 3MB).
When running on Solr 4.0 I managed to finish indexing in 3 hours.
However, after we upgraded to Solr 4.9, indexing needs 3 days to finish.
I've done some profiling; the numbers I get are:
size figure of document,t
Hi Shawn,
Thanks for your reply.
The memory settings of my Solr box are:
12GB physical memory,
4GB for Java (-Xmx4096m).
The index size is around 4GB in Solr 4.9; I think it was over 6GB in Solr 4.0.
I do think the Java heap size is one of the reasons for this slowness. I'm
doing one big commit an
n at https://issues.apache.org/jira/browse/LUCENE-5914.
Best,
Erick
____
From: Li, Ryan
Sent: Friday, September 05, 2014 3:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr add document over 20 times slower after upgrade from 4.0 to
4.9
HI Shawn,
Thanks for y
Hi Guys,
Just an update.
I've tried Solr 4.10 (same code as for Solr 4.9), and it has the same indexing
speed as 4.0. The only problem left now is that Solr 4.10 takes more memory
than 4.0, so I'm trying to figure out the best number for the Java heap size.
I think that proves there is s
We have a query which has both sort and group.sort. What we expect
is that we can use sort to order the groups while inside each group we have a
different sort.
However, it looks like sort is overwriting the sort order inside groups.
Can anyone help us with this?
Basically we want to sort
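As far as I know that split is exactly what the two parameters are for: sort orders the groups (by each group's top document), while group.sort orders the documents inside each group. A hedged SolrJ sketch with made-up field names:

import org.apache.solr.client.solrj.SolrQuery;

public class GroupSortExample {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("*:*");
        q.set("group", true);
        q.set("group.field", "category");      // assumed field
        q.set("sort", "popularity desc");      // orders the groups themselves
        q.set("group.sort", "price asc");      // orders docs within each group
        System.out.println(q);
    }
}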
Hi,
I don't seem to be able to find any info on the possibility of getting stats on
dynamic fields. stats=true&stats.field=xyz_* appears to literally treat
"xyz_*" as the field name, star included. Is there a way to get stats on
dynamic fields without explicitly listing them in the query?
Thanks!
Li
-user@lucene.apache.org
Subject: Re: How to share config files in SolrCloud between multiple
cores(collections)
To share configs in SolrCloud you just upload a single config set and then link
it to multiple collections. You don't actually use solr.xml to do it.
- Mark
On Mar 19, 2013, at 10:43 AM,
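The usual command sequence for that (hedged: the ZooKeeper address, paths and names below are placeholders) is an upconfig followed by a linkconfig with the zkcli tool shipped in Solr's cloud-scripts directory:

./zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/conf -confname sharedconf
./zkcli.sh -zkhost localhost:2181 -cmd linkconfig -collection collection1 -confname sharedconf

After the link, reloading (or creating) the collection picks up the shared config.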
How do you shut down SolrCloud? Just kill all the nodes?
Regards,
Ivan
seem to be doing anything.
>
> How can I generate a dump otherwise to see, why solr hangs?
>
> Regards,
>
> Rohit
>
--
Best Regards.
Jerry. Li | 李宗杰
hi
I suggest you set up an environment and test it yourself; 1M documents is no problem at all.
2011/9/30 秦鹏凯 :
> Hi all,
>
> Now I'm doing research on solr distributed search, and it
> is said documents more than one million is reasonable to use
> distributed search.
> So I want to know, does anyone have the test
> result(Such as time cost) of using singl
Hi there,
I am trying to look into the performance impact of data replication on
query response time. To get a clear picture, I would like to know how
to get the size of the data being replicated for each commit. Through the
admin UI, you may read that x of y GB of data is being replicated; however,
"y" is
Dear all,
I have a question about sorting data retrieved from Solr. As far as I know, Lucene
retrieves data according to the degree of keyword matching on a text field
(partial matching).
If I search data by a string field (complete matching), how does Lucene sort
the retrieved data?
If I add some filters,
Dear all,
I am using SolrJ to implement a system that needs to provide users with
search services. I have some questions about Solr searching, as follows.
As far as I know, Lucene retrieves data according to the degree of keyword
matching on a text field (partial matching).
But if I search data by a str
p N data you have
> got.
>
> Sent from my iPad
>
> On Jan 21, 2012, at 1:33 PM, Bing Li wrote:
>
> > Dear all,
> >
> > I am using SolrJ to implement a system that needs to provide users with
> > searching services. I have some questions about Solr searc
Kant wrote:
> Lucene has a mechanism to "boost" up/down documents using your custom
> ranking algorithm. So if you come up with something like Pagerank
> you might do something like doc.SetBoost(myboost), before writing to index.
>
>
>
> On Sat, Jan 21, 2012 at 5:07
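A sketch of the index-time boost idea from that quote (Lucene 3.x API; the score value and field names are made up, and index-time boosts are coarse because they are stored with limited precision). Another common route is to index the PageRank score as its own field and fold it into the ranking at query time with a boost function.

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class PageRankBoost {
    public static void main(String[] args) {
        float pageRank = 0.83f;   // assumed externally computed score
        Document doc = new Document();
        doc.add(new Field("id", "doc-1", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("title", "some page title", Field.Store.YES, Field.Index.ANALYZED));
        // Bake the external score into the document's index-time boost.
        doc.setBoost(1.0f + pageRank);
        System.out.println(doc);
    }
}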
Hi there,
Just picked up SolrJ a few days ago. I have my Solr server set up, data
loaded, and everything worked fine with the web admin page. Then the problem
came when I was trying to use SolrJ to interact with the Solr server. I
was stuck with a "NoClassNotFoundException" yesterday. Being new to the
d
folder i guess?
On Oct 1, 2010, at 10:50 AM, Xin Li wrote:
> Hi, there,
>
> Just picked up SolrJ few days ago. I have my Solr Server set up, data
> loaded, and everything worked fine with the web admin page. Then
problem
> came when I was trying to use SolrJ to interact with the S
1.4.1
Xin,
I also had a similar error when I picked up SolrJ.
See the first section of this wiki page for the extra jars (the ones not found
in the dist directory):
http://wiki.apache.org/solr/Solrj
-Jon
-Original Message-
From: Xin Li [mailto:x...@book.com]
Sent: Friday, October 01
I asked the exact same question the day before. If you or anyone else has a
pointer to the solution, please share it on the mailing list. For now, I am
using a Perl script instead to query the Solr server.
Thanks,
Xin
-Original Message-
From: ankita shinde [mailto:ankitashinde...@gmail.com]
Sent: Saturday,
System.out.println(e);
System.exit(1);
}
SolrDocumentList docs = rsp.getResults();
for (SolrDocument doc : docs) {
System.out.println(doc.toString());
}
}
}
On Oct 4, 2010, at 11:26 AM, Xin
Hi,
I am looking for a quick solution to improve a search engine's spell-checking
performance. I was wondering if anyone has tried to integrate the Google SpellCheck API
with the Solr search engine (if that is possible). Google spellcheck came to mind
for two reasons. First, it is costly to clean up the
Oops, never mind. I just read the Google API policy: a 1000-queries-per-day limit, and for
non-commercial use only.
-Original Message-
From: Xin Li
Sent: Monday, October 18, 2010 3:43 PM
To: solr-user@lucene.apache.org
Subject: Spell checking question from a Solr novice
Hi,
I am looking
As we know, we can use a browser to check whether Solr is running by going to
http://$hostName:$portNumber/$masterName/admin, say
http://localhost:8080/solr1/admin. My question is: is there any way to check
it from the command line? I used "curl http://localhost:8080" to check my Tomcat,
and it worked fin
Thanks Bob and Ahmet,
"curl http://localhost:8080/solr1/admin/ping" works fine :)
Xin
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Monday, October 25, 2010 4:03 PM
To: solr-user@lucene.apache.org
Subject: Re: command line to check if Solr is up running
> M
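If you ever need the same check from Java rather than the command line, a SolrJ sketch (URL assumed):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr1");
        SolrPingResponse rsp = server.ping();   // hits the /admin/ping handler
        System.out.println("status=" + rsp.getStatus() + " qtime=" + rsp.getQTime() + "ms");
    }
}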
If you just want a quick way to query a Solr server, the Perl module
WebService::Solr is pretty good.
On Mon, Nov 1, 2010 at 4:56 PM, Lance Norskog wrote:
> Yes, you can write your own app to read the file with SVNkit and post
> it to the ExtractingRequestHandler. This would be easiest.
>
> On Mon, N
hi there,
I am quite new to Solr and have a very basic question about storing and
indexing documents.
I am trying the Solr example, and when I run a command like 'java -jar
post.jar foo/test.xml', I get the impression that Solr will index the
given file no matter where it is stored, and
hi
I am new to Solr and I would like to use Solr as my document server plus
search engine. But Solr is not CMIS-compatible (which it need not be, as
it is not built as a pure document management server). In that sense, I
would build another layer on top of Solr so that the exposed interface would
I want to make something like Alfresco, but without that many features,
and I'd like to utilise the search ability of Solr.
On Fri, Jan 18, 2013 at 4:11 PM, Gora Mohanty wrote:
> On 18 January 2013 10:36, Nicholas Li wrote:
> > hi
> >
> > I am new to solr and I
; A colleague of mine when I was working for Sourcesense made a CMIS
> plugin for Solr. It was one way, and we used it to index stuff out of
> Alfresco into Solr. I can't search for it now, let me know if you can't
> find it.
>
> Upayavira
>
> On Fri, Jan 18, 2013, at 05:35 AM
We have multiple cores with the same configuration. Before using SolrCloud, we
could use relative paths in solr.xml. But with Solr 4, it seems that relative
paths for the schema and config are not allowed in solr.xml.
Regards,
Ivan
e:
> You can update the document in the index quite frequently. IDNK what
> your requirement is, another option would be to boost query time.
>
> On Sun, Jan 22, 2012 at 5:51 AM, Bing Li wrote:
> > Dear Shashi,
> >
> > Thanks so much for your reply!
> >
> &
Dear all,
I wonder how data in HBase is indexed. Solr is used in my system now
because data is managed in an inverted index. Such an index is suitable for
retrieving unstructured data in huge amounts. How does HBase deal with this
issue? Could I replace Solr with HBase?
Thanks so much!
Best regards,
>> It's on our road map.
>>
>> FYI
>>
>> On Wed, Feb 22, 2012 at 9:28 AM, Bing Li wrote:
>>
>> > Jacques,
>> >
>> > Yes. But I still have questions about that.
>> >
>> > In my system, when users search with a keywor
nt schema and index the
> rank too for range queries and such. is my understanding of your scenario
> wrong?
>
> thanks
>
>
> On Wed, Feb 22, 2012 at 9:51 AM, Bing Li wrote:
>
>> Mr Gupta,
>>
>> Thanks so much for your reply!
>>
>> In my use
To my knowledge, Solr cannot support this.
In my case, I get data by keyword matching from Solr and then rank the results
by PageRank after that.
Thanks,
Bing
On Wed, Apr 4, 2012 at 6:37 AM, Manuel Antonio Novoa Proenza <
mano...@estudiantes.uci.cu> wrote:
> Hello,
>
> I have in my Solr
which I can
transmit. After transmission, how do I append them to the old indexes? Does
the appending block searching?
Thanks so much for your help!
Bing Li
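If the transmitted piece is itself a Lucene index directory, the append is a merge. A hedged Lucene 3.x sketch (paths are assumptions); searches keep serving the last committed snapshot while this runs, and readers only see the new segments after they reopen:

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class AppendIndex {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File("/data/main-index")),
                new IndexWriterConfig(Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36)));
        // Merge the received index into the existing one.
        writer.addIndexes(FSDirectory.open(new File("/data/received-index")));
        writer.commit();
        writer.close();
    }
}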
o existing indexes?
Does the appending affect the querying?
I am learning Solr. But it seems that Solr does that for me. However, I have
to set up Tomcat to use Solr. I think it is a little bit heavy.
Thanks!
Bing Li
queries must be answered instantly. That's
what I mean by "appending". Does that happen in Solr?
Best,
Bing
On Sat, Nov 20, 2010 at 1:58 AM, Gora Mohanty wrote:
> On Fri, Nov 19, 2010 at 10:53 PM, Bing Li wrote:
> > Hi, all,
> >
> > Since I didn't find that L
to the updated index upon
> successful replication.
>
> Older versions of Solr used rsynch & etc.
>
> Best
> Erick
>
> On Fri, Nov 19, 2010 at 10:52 AM, Bing Li wrote:
>
>> Hi, all,
>>
>> I am working on a distributed searching system. Now I have
ching large indexes in a large scale distributed environment, right?
Thanks!
Bing
On Sat, Nov 20, 2010 at 3:01 AM, Gora Mohanty wrote:
> On Sat, Nov 20, 2010 at 12:05 AM, Bing Li wrote:
> > Dear Erick,
> >
> > Thanks so much for your help! I am new in Solr. So I have no id
ttp11ConnectionHandler.process(Http11Protocol.java:588)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
--
Best Regards.
Jerry. Li | 李宗杰
id
text
On Wed, Dec 1, 2010 at 2:50 PM, Gora Mohanty wrote:
> On Wed, Dec 1, 2010 at 10:56 AM, Jerry Li wrote:
> > Hi team
> >
> > My solr version is 1.4
> > There is an ArrayIndexOutOfBoundsException wh
Hi
It seems to work fine again after I changed the "author" field type from text to
string. Could anybody give some information about why? Very much appreciated.
On Wed, Dec 1, 2010 at 5:20 PM, Jerry Li wrote:
> sorry for lost, following is my schema.xml config and I use IKTokenizer for
> C
gt; http://wiki.apache.org/solr/FAQ#Why_Isn.27t_Sorting_Working_on_my_Text_Fields.3F
>
> And also see Erick's explanation
>
> http://search-lucene.com/m/7fnj1TtNde/sort+on+a+tokenized+field&subj=Re+Solr+sorting+problem
>
>
>
>
--
Best Regards.
Jerry. Li
wish to import the Lucene indexes into Solr; are there any other
approaches? I know that Solr is a serverized Lucene.
Thanks,
Bing Li
For Solr replication, we can send a command to disable replication. Does
anyone know where I can verify the replication enabled/disabled
setting? I cannot seem to find it on the dashboard or in the details command
output.
Thanks,
Xin
Does anyone know?
Thanks,
-Original Message-
From: Xin Li [mailto:xin.li@gmail.com]
Sent: Thursday, December 02, 2010 12:25 PM
To: solr-user@lucene.apache.org
Subject: disabled replication setting
For solr replication, we can send command to disable replication. Does
anyone know
Dear all,
I am a new user of Solr and am just trying some basic samples.
Solr starts correctly with Tomcat.
However, when I put a new schema.xml under SolrHome/conf and start
Tomcat again, I get the following two exceptions.
Solr cannot be started correctly unless usin
I think this is expected behavior. You have to issue the "details"
command to get the real indexversion for slave machines.
Thanks,
Xin
On Mon, Dec 6, 2010 at 11:26 AM, Markus Jelsma
wrote:
> Hi,
>
> The indexversion command in the replicationHandler on slave nodes returns 0
> for indexversion a
mbers although the replication handler's
> source code seems to agree with you judging from the comments.
>
> On Monday 06 December 2010 17:49:16 Xin Li wrote:
>> I think this is expected behavior. You have to issue the "details"
>> command to get the real indexver