Stephan and all,
I am evaluating this like you are. You may want to check
http://www.tomkleinpeter.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/.
I would appreciate if others can shed some light on this, too.
Bests,
James
On Fri, Sep 10, 2010 at 6:07 AM, Stephan Raemy wrote:
> Hi
Before stress testing, should I disable the Solr caches?
Which tool do you use?
How do I run a stress test correctly?
Any pointers?
--
regards
j.L ( I live in Shanghai, China)
Are you sure the URL is correct?
--
regards
j.L ( I live in Shanghai, China)
Solr has many field types, like integer, long, double, sint, sfloat,
tint, tfloat, and more,
but Lucene has no field types, just name and value, and the value is only a string.
So I am not sure whether it is a problem when I use Solr to search an index made by
Lucene.
--
regards
j.L ( I live in Shanghai, China)
I use Solr to search, and the index is made by Lucene (not
EmbeddedSolrServer; the wiki is old).
Is it a problem when I use Solr to search?
What is the difference between an index made by Lucene and one made by Solr?
Thanks
--
regards
j.L ( I live in Shanghai, China)
I use lucene-core-2.9-dev.jar and lucene-misc-2.9-dev.jar.
On Thu, Jul 2, 2009 at 2:02 PM, James liu wrote:
> i try http://wiki.apache.org/solr/MergingSolrIndexes
>
> system: win2003, jdk 1.6
>
> Error information:
>
>> Caused by: java.lan
I tried http://wiki.apache.org/solr/MergingSolrIndexes
System: win2003, jdk 1.6
Error information:
> Caused by: java.lang.ClassNotFoundException:
> org.apache.lucene.misc.IndexMergeTool
> at java.net.URLClassLoader$1.run(Unknown Source)
> at java.security.AccessController.doPriv
If a user searches with a keyword and gets a summary (auto-generated from the
keyword)... like this:
Doc fields: id, text
id: 001
text:
> Open source is a development method for software that harnesses the power
> of distributed peer review and transparency of process. The promise of open
> source is better qual
Collins:
I don't know what you mean.
--
regards
j.L ( I live in Shanghai, China)
On Mon, Feb 16, 2009 at 4:30 PM, revathy arun wrote:
> Hi,
>
> When I index chinese content using chinese tokenizer and analyzer in solr
> 1.3 ,some of the chinese text files are getting indexed but others are not.
>
Are you sure your analyzer handles it well?
If you're not sure, you can use the analysis page in the Solr admin UI.
First: you don't have to restart Solr. You can replace the old data with new data
and tell Solr to use the new searcher; you can find something useful in the shell scripts
that ship with Solr.
Second: you don't have to restart Solr; just keep the id the same. Example: old
doc id:1, title:hi; new doc id:1, title:welcome. Just index the new data and it will
replace the old document.
1: Modify your schema.xml, like:
2: Add your field:
3: Add your analyzer jar to {solr_dir}\lib\
4: Rebuild Solr and you will find the new build in {solr_dir}\dist
5: Follow the tutorial to set up Solr
6: Open your browser to the Solr admin page and use the analysis page to check the analyzer; it
will show you how it analyzes your text.
You can find the answer in the tutorial or the example.
On Tuesday, June 2, 2009, The Spider wrote:
>
> Hi,
> I am using a solr nightly build for my search.
> I have to search in the location field of the table which is not my default
> search field.
> I will briefly explain my requirement below:
> I want to ge
You mean how to configure Solr to support Chinese?
Update problem?
On Tuesday, June 2, 2009, Fer-Bj wrote:
>
> I'm sending 3 files:
> - schema.xml
> - solrconfig.xml
> - error.txt (with the error description)
>
> I can confirm by now that this error is due to invalid characters for the
> XML form
That's up to your Solr client.
On Mon, Nov 24, 2008 at 1:24 PM, souravm <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Looking for some insight on distributed search.
>
> Say I have an index distributed in 3 boxes and the index contains time and
> text data (typical log file). Each box has index for different tim
First, make sure the XML is UTF-8 and the field values are UTF-8.
Second, you should POST the XML as UTF-8.
My advice: use UTF-8 for all encoding...
It makes my Solr work well; I use Chinese.
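For illustration, a minimal PHP sketch (my addition, not from the original mail) of posting a UTF-8 document; the URL, field names, and sample text are assumptions:

<?php
// Minimal sketch (assumption, not from the original mail): POST a UTF-8 XML
// document to a local Solr update handler, declaring the charset explicitly
// so multi-byte (e.g. Chinese) field values are not mangled into "???".
$xml = '<add><doc>'
     . '<field name="id">001</field>'
     . '<field name="text">中文内容</field>'
     . '</doc></add>';

$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: text/xml; charset=utf-8\r\n",
    'content' => $xml,
)));

// Send the document; a separate <commit/> POST makes it searchable.
echo file_get_contents('http://localhost:8080/solr/update', false, $context);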
--
regards
j.L
Check procedure:
1: rm -r $tomcat/webapps/*
2: rm -r $solr/data  (your index data directory)
3: check the XML (any XML you modified)
4: start Tomcat
I had the same error, but I forgot how I fixed it... so you can use my check procedure;
I think it will help you.
I use Tomcat + Solr on win2003, FreeBSD, and Mac OS X 10.5.5,
and I find the URL is not the same as the others.
--
regards
j.L
First, you should escape some characters, like this (PHP code):
> function escapeChars($string) {
>     $string = str_replace("&", "&amp;", $string);
>     $string = str_replace("<", "&lt;", $string);
>     $string = str_replace(">", "&gt;", $string);
>     $string = str_replace("'", "&apos;", $string);
>     $string = str_replace('"', "&quot;", $str
ecific reason why the CJK analyzers in Solr
> were chosen to be n-gram based instead of a morphological analyzer, which is
> kind of implemented in Google, as it is considered to be more effective than
> the n-gram ones?
lly be indexing millions of documents.
>
> James,
>
> We would have a look at hylanda too. What abt japanese and korean
> analyzers,
> any recommendations?
>
> - Eswar
>
> On Nov 27, 2007 7:21 AM, James liu <[EMAIL PROTECTED]> wrote:
>
> > I don'
If your analyzer is the standard one, you can try using tokenization. (You can find the answer
in the analyzer source code and schema.xml.)
On Nov 27, 2007 9:39 AM, zx zhang <[EMAIL PROTECTED]> wrote:
> lance,
>
> The following is a instance schema fieldtype using solr1.2 and CJK
> package.
> And it works. As you said,
I don't think n-gram is a good method for Chinese.
Lucene's CJKAnalyzer is 2-gram.
Eswar K:
If it is a Chinese analyzer, I recommend hylanda (www.hylanda.com); it is
the best Chinese analyzer, and it is not free.
If you want a free Chinese analyzer, maybe you can try je-analyzer. It has
some problems when
If you use Tomcat, its default port is 8080, plus other default ports.
So just use another Tomcat that uses 8181 and other ports... (I remember you
should modify three ports per Tomcat.)
I used to have four Tomcats on one server.
On Nov 9, 2007 7:39 AM, Isart Montane <[EMAIL PROTECTED]> wrote:
> Hi all,
>
If I understand correctly, you just do it like this (I use PHP):
$data1 = getDataFromInstance1($url);
$data2 = getDataFromInstance2($url);
You just have multiple Solr instances and get the data from each instance.
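A hedged sketch of what such a helper might look like (the helper name, base URLs, and wt=json response format are my assumptions, not from the original mail):

<?php
// Sketch, not from the original mail: query two Solr instances and merge the
// results by score.
function getDataFromInstance($baseUrl, $keyword) {
    $url = $baseUrl . '/select?q=' . urlencode($keyword)
         . '&fl=id,score&rows=10&wt=json';
    $response = json_decode(file_get_contents($url), true);
    return $response['response']['docs'];
}

$docs = array_merge(
    getDataFromInstance('http://localhost:8080/solr1', 'solr'),
    getDataFromInstance('http://localhost:8080/solr2', 'solr')
);

// Sort the combined result set by score, highest first.
usort($docs, function ($a, $b) { return $b['score'] <=> $a['score']; });
print_r($docs);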
On Nov 12, 2007 11:15 PM, Dilip.TS <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Does SOLR supports
Thanks to everybody who helped me.
Especially Dave, thank you.
On Nov 8, 2007 11:21 AM, James liu <[EMAIL PROTECTED]> wrote:
> hmm
>
> i find error,,,that is my error not about php and phps ..
>
> i use old config to testso config have a problem..
>
> that is Title i
hmm
I found the error... it is my error, not about php and phps.
I used an old config to test, so the config had a problem.
That is, for Title I used double as its type... it should use text.
On Nov 8, 2007 10:29 AM, James liu <[EMAIL PROTECTED]> wrote:
> php now is ok..
>
>
;q";s:1:"2";s:2:"wt";s:4:"phps";s:4:"rows";a:2:{i:0;s:1:"2";i:1;s:2:"10";}s:7:"version";s:3:"
> 2.2";}}s:8:"response";a:3:{s:8:"numFound";i:28;s:5:"start";i:0;s:4:"docs";a:2:{i
I just reduced the response information... and you will see my result (full, not
partial).
*before unserialize*
> string(433)
> "a:2:{s:14:"responseHeader";a:3:{s:6:"status";i:0;s:5:"QTime";i:0;s:6:"params";a:7:{s:2:"fl";s:5:"Title";s:6:"indent";s:2:"on";s:5:"start";s:1:"0";s:1:"q";s:1:"2";s:2:"wt";s:4:"ph
same answer.
On Nov 7, 2007 11:41 AM, James liu <[EMAIL PROTECTED]> wrote:
> afternoon,,i will update svn...and try the newest...
>
>
>
>
> On Nov 7, 2007 11:23 AM, Dave Lewis <[EMAIL PROTECTED]> wrote:
>
> >
> > On Nov 6, 2007, at 8:10 PM, James li
This afternoon I will update from svn... and try the newest version...
On Nov 7, 2007 11:23 AM, Dave Lewis <[EMAIL PROTECTED]> wrote:
>
> On Nov 6, 2007, at 8:10 PM, James liu wrote:
>
> > first var_dump result(part not all):
> >
> > string(50506)
> >> "a:2:{
t;2";s:2:"wt";s:4:"phps";s:4:"rows";s:2:"10";s:7:"version";s:3:"
> 2.2";}}
>
The results of the two var_dumps:
bool(false)
On Nov 6, 2007 10:36 PM, Dave Lewis <[EMAIL PROTECTED]> wrote:
> What are the results of the two var_dumps?
p($a);
$a = unserialize($a);
echo 'after unserialize...';
var_dump($a);
?>
On 11/6/07, Stu Hood <[EMAIL PROTECTED]> wrote:
>
> Did you enable the PHP serialized response writer in your solrconfig.xml?
> It is not enabled by default.
>
> Thanks,
> Stu
>
>
I know that... but if you try it, you will find a similar problem.
On 11/5/07, Robert Young <[EMAIL PROTECTED]> wrote:
>
> I would imagine you have to unserialize
>
> On 11/5/07, James liu <[EMAIL PROTECTED]> wrote:
> > i find they all return string
> >
> > >
I find they all return a string:
<?php
$url = 'http://localhost:8080/solr/select/?q=solr&version=2.2&start=0&rows=10&indent=on&wt=php';
var_dump(file_get_contents($url));
?>
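For completeness, a sketch (my assumption of where the thread ends up, not the original code) of using wt=phps and unserialize(), assuming the PHP serialized response writer has been enabled in solrconfig.xml, as Stu notes it is not by default:

<?php
// Sketch, not from the original mail: fetch a PHP-serialized response (wt=phps)
// and unserialize it into an array. Assumes the phps writer is enabled and
// Solr runs on localhost:8080.
$url = 'http://localhost:8080/solr/select/?q=solr&start=0&rows=10&fl=Title,score&wt=phps';
$raw = file_get_contents($url);

$result = unserialize($raw);
if ($result === false) {
    // unserialize() returns false on malformed input, e.g. when the writer
    // is not enabled and Solr returns XML instead.
    die("Could not unserialize response:\n" . $raw);
}

echo 'numFound: ' . $result['response']['numFound'] . "\n";
foreach ($result['response']['docs'] as $doc) {
    print_r($doc);
}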
--
regards
jl
If you rebuild Solr, a safe method is to rm -r $tomcat/webapps/* first.
2007/11/1, Chris Hostetter <[EMAIL PROTECTED]>:
>
>
> : Is there an easy to find out which version of solr is running. I
> installed
> : solr 1.2 and set up an instance using Tomcat. It was successful before.
>
> FYI: starting a while b
Where can I read about the 1.3 new features?
2007/10/26, Venkatraman S <[EMAIL PROTECTED]>:
>
> On 10/26/07, Mike Klaas <[EMAIL PROTECTED]> wrote:
> >
> > If we did a 1.2.x, it shoud (imo) contain no new features, only
> > important bugfixes.
>
>
> I have been having a look at the trunk for quite sometime n
I find it happens when it does a commit.
I use the Solr 1.2 release.
I use crontab to do the index work.
2007/10/15, James liu <[EMAIL PROTECTED]>:
>
> i have 40 instances,,,one instance lost segments* file(happen after commit
> and optimize)
>
> anyone have similar problem?
>
I have 40 instances... one instance lost its segments* file (it happened after commit
and optimize).
Has anyone had a similar problem?
Can I fix this problem?
Can I recover this instance's data?
--
regards
jl
there.
>
> Otis
>
> --
>
> Lucene - Solr - Nutch - Consulting -- http://sematext.com/
>
>
>
>
> - Original Message
> From: James liu <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Tuesday, October 9, 2007 11:15:56 PM
> Subject: i
I just want to know whether there is anything that can decrease index size, other than
adding hardware or tuning Lucene parameters.
--
regards
jl
I think the text field does not need stored='true' unless you will display it. (It will help you
decrease index size and will not affect search.)
Do indexing and search use the same box? If so, you should monitor search
response time while indexing (including CPU and RAM changes).
I have a similar problem and I increased the JVM s
I can't download it from http://jetty.mortbay.org/jetty5/plus/index.html
--
regards
jl
If you use multiple Solr instances with one index, each will cache individually.
So I wonder whether they can share their cache (they have the same config).
--
regards
jl
to accomplish.
>
> Thanks,
> Grant
>
> On Sep 23, 2007, at 10:38 AM, James liu wrote:
>
> > i wanna do it.
> >
> > Maybe someone did it, if so, give me some tips.
> >
> > thks
> >
> > --
> > regards
> > jl
>
> ---
to see them immediately, or just the current user?
> >
> > We can better help you if you give us more details on what you are
> > trying to accomplish.
> >
> > Thanks,
> > Grant
> >
> > On Sep 23, 2007, at 10:38 AM, Jam
I want to do it.
Maybe someone has done it; if so, give me some tips.
Thanks
--
regards
jl
Thanks, Ryan.
2007/9/10, Ryan McKinley <[EMAIL PROTECTED]>:
>
> James liu wrote:
> > i wanna try patch:
> >
> https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
> >
> > and i download solr1.2 r
I want to try the patch:
https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
and I downloaded the Solr 1.2 release.
patch < SOLR-269*.pach (run in
'/tmp/apache-solr-1.2.0/src/test/org/apache/solr/update'
)
It shows me:
|Index: src/test/org/apache
OK... I see... thank you, Mike.
2007/8/31, Mike Klaas <[EMAIL PROTECTED]>:
>
>
> On 29-Aug-07, at 10:21 PM, James liu wrote:
>
> > Does it affect with doc size?
> >
> > for example 2 billion docs, 10k doc2 billion docs, but doc size
> > is 10m.
>
>
Is it affected by doc size?
For example, 2 billion docs at 10k per doc vs. 2 billion docs where each doc is 10m.
2007/8/30, Mike Klaas <[EMAIL PROTECTED]>:
>
> 2 billion docs (signed int).
>
> On 29-Aug-07, at 6:24 PM, James liu wrote:
>
> > what is the limits for Lucene an
What are the limits for Lucene and Solr?
100m, 1000m, 5000m, or some other number of docs?
2007/8/24, Walter Underwood <[EMAIL PROTECTED]>:
>
> It should work fine to index them and search them. 13 million docs is
> not even close to the limits for Lucene and Solr. Have you had problems?
>
> wunder
>
> On
Lucene is a search library, Solr is a search server that uses
> Lucene.
>
> Cheers,
> Grant
>
> On Aug 8, 2007, at 2:57 AM, James liu wrote:
>
> > if i wanna calc it by my method, something i should notice ?
> >
> > anyone did it?
> >
> >
> &g
If I want to calculate it with my own method, is there anything I should be aware of?
Has anyone done it?
--
regards
jl
fieldset "topic" indexed='false' and stored='true'
i don't know why it will be analyzed?
now i wanna it only store not analyzed,,,how can i do?
--
regards
jl
I correct it... I index 17M docs, not 1.7M... so OutOfMemory happens when it
has finished indexing ~11.3M docs.
It is a new index.
I think this may be the reason:
On 7/18/07, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> Why? Too small of a Java heap. :)
> Increase the size of the Java heap and lower the maxBu
When I index 1.7M docs at 4k-5k per doc,
OutOfMemory happens when it has finished indexing ~1.13M docs.
I just restart Tomcat, delete all locks, and restart the indexing.
No error or warning info until it finishes.
Anyone know why? Or has anyone had the same error?
--
regards
jl
2007/7/18, Ryan McKinley <[EMAIL PROTECTED]>:
Xuesong Luo wrote:
> Hi, there,
> We have one master server and multiple slave servers. The multiple slave
> servers can be run either on the same box or different boxes. For
> slaves on the same box, is there any best practice that they should use
You can find the dataDir configuration in solrconfig.xml (Solr 1.2).
2007/7/10, nithyavembu <[EMAIL PROTECTED]>:
Hi,
I tried as you said and got the result without any error. So we can make
the solr home anywhere. But we have to give the path correctly in solr.xml
.
Am i correct?
Now i am one step f
I use FreeBSD.
2007/6/16, Yonik Seeley <[EMAIL PROTECTED]>:
On 6/14/07, James liu <[EMAIL PROTECTED]> wrote:
> I just timing my script to get data from 2 solr boxes, not complete
script.
> It just query two box and return id,score .rows=10. response type use
json.
>
&g
ized
output from solr) result so I can test?
Thanks
-Nick
On 6/28/07, James liu <[EMAIL PROTECTED]> wrote:
> code not change,,,and i not use utf8_decodeshould do it?
>
> 2007/6/28, Nick Jenkin <[EMAIL PROTECTED]>:
> >
> > Hi James
> > It is totally no
I mean define it in schema.xml...
--
regards
jl
The code didn't change... and I don't use utf8_decode. Should I?
2007/6/28, Nick Jenkin <[EMAIL PROTECTED]>:
Hi James
It is totally not optimized, when you say change your content into
???, I assume this is because of UTF8 issues, are you using
utf8_decode etc?
Thanks
-Nick
On 6/28/07, Jam
It is slower than json and xml... and it changes my content into ???.
When I use json, the content is OK.
This afternoon I will read your code.
2007/6/27, James liu <[EMAIL PROTECTED]>:
ok,,thks nick,,,i just forget replace jar file..
wait a minute i will test speed...
2007/6/27, Nick
OK, thanks Nick... I just forgot to replace the jar file.
Wait a minute, I will test the speed...
2007/6/27, Nick Jenkin <[EMAIL PROTECTED]>:
http://nickjenkin.com/misc/apache-solr-1.2.0-php-serialize.tar.gz
Try that
-Nick
On 6/27/07, James liu <[EMAIL PROTECTED]> wrote:
> i use tomcat
I use Tomcat. Send your Solr version to me... I'll try it again.
2007/6/27, Nick Jenkin <[EMAIL PROTECTED]>:
If you are using the example provided in 1.2 (using jetty) you need to
use "ant example"
rather than "ant dist"
-Nick
On 6/27/07, James liu <[EMAIL PROTEC
How about its performance?
2007/6/26, Kijiji Xu, Ping <[EMAIL PROTECTED]>:
I had solved this problem,below is my POST code,I used HTTP_Request of
PEAR,it's so simple.thank you all very much .FYI;
private function doPost($url,$postData){
$req = &new HTTP_Request($url,array(
'm
The XML data should be bigger than the JSON data, yet it transfers quicker than JSON...
It surprised me.
2007/6/27, Yonik Seeley <[EMAIL PROTECTED]>:
It would be helpful if you could try out the patch at
https://issues.apache.org/jira/browse/SOLR-276
-Yonik
On 6/26/07, Yonik Seeley <[EMAIL PROTECTED]> wro
Very strange. Am I the only one failing? Does anyone have the same problem?
If you have time, maybe zip your Solr and mail it to me... and I'll try it again.
2007/6/26, Nick Jenkin <[EMAIL PROTECTED]>:
Interesting, what version of solr are you using, I tested on 1.2.
-Nick
On 6/26/07, James liu <[EMAIL PROTECTED]> wro
2007/6/27, Mike Klaas <[EMAIL PROTECTED]>:
On 25-Jun-07, at 10:53 PM, James liu wrote:
>
> [quote]how can i use index all with ram and how to config which ram
> i should
> use?[/quote]
Your os will automatically load the most frequently-used parts of the
index in ram.
If
First, did you try it? Which system do you use?
If you use FreeBSD, just give up trying; it does not work well on FreeBSD.
2007/6/27, Otis Gospodnetic <[EMAIL PROTECTED]>:
Hi,
Here is a puzzling one. I can't get Solr to invoke snaphooter
properly. Solr claims my snapshooter is not where I said it is:
SEVERE: java.
<[EMAIL PROTECTED]>:
I have some good news :o)
https://issues.apache.org/jira/browse/SOLR-275
Please let me know if you find any bugs
Thanks
-Nick
On 6/26/07, James liu <[EMAIL PROTECTED]> wrote:
> I think it simple to u.
>
> so i wait for ur good news.
>
> 200
Thanks Yonik, and:
[quote]How can I keep the whole index in RAM, and how do I configure how much RAM
it should use?[/quote]
> > Hi James
> > > I think you would be better of outputting an PHP array, and running
> > > eval() over it, the PHP serialize format is quite complicated.
> > >
> > > On that note, you might be interested in:
> > > http://issues.apache.org/
10M docs at 4k/doc vs. 1M docs at 40k/doc:
which will be faster in the same environment?
--
regards
jl
For example, I want to sort by datetime; does it have to be stored='true', and
I want to define it
Am I right?
If so, I want to define score like that; how do I define it, or maybe it was
If my fields all use indexed=true, stored=false, does that mean lower disk
IO and more RAM used?
How can I use it
SolrIndexSearcher, and I did not change it.
2007/6/25, James liu <[EMAIL PROTECTED]>:
I means how to add it to my solr(1.2 production)
2007/6/25, James liu <[EMAIL PROTECTED]>:
>
> aha,,it seems good, how can i fix it with my solr, i don't know how do
> with it
>
>
&
I mean, how do I add it to my Solr (1.2 production)?
2007/6/25, James liu <[EMAIL PROTECTED]>:
Aha, it seems good. How can I apply it to my Solr? I don't know what to do
with it.
2007/6/25, Nick Jenkin <[EMAIL PROTECTED]>:
>
> Hi James
> I think you would be better of ou
ou might be interested in:
http://issues.apache.org/jira/browse/SOLR-196
-Nick
On 6/25/07, James liu <[EMAIL PROTECTED]> wrote:
> which files i should change from source?
>
> and if i change ok.
>
> how to compile? just ant dist?
>
> --
> regards
> jl
>
--
regards
jl
Which files should I change in the source?
And if I change them OK,
how do I compile? Just ant dist?
--
regards
jl
Aha, the same issue I found a few days ago.
I'm sorry I forgot to report it.
2007/6/22, Yonik Seeley <[EMAIL PROTECTED]>:
On 6/21/07, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> I just started running the scripts and
>
> The commit script seems to run fine, but it says there was an error. I
> looke
Aha, sorry, I missed it.
2007/6/21, Chris Hostetter <[EMAIL PROTECTED]>:
: curl http://192.168.7.6:8080/solr0/update --data-binary
: 'nodeid:20'
:
: i remember it is ok when i use solr 1.1
...
: HTTP Status 400 - missing content stream
please note the "Upgrading from Solr 1.1" section o
Solr: 1.2
curl http://192.168.7.6:8080/solr0/update --data-binary
'nodeid:20'
I remember it was OK when I used Solr 1.1.
Did something change?
It shows me:
HTTP Status 400 - missing content stream
--
type: Status report
message: missing content stream
description: The
I see SOLR-215 from this mail.
Does it now really support multiple indexes, and will a search return merged
data?
For example:
I want to search for: aaa, and I have index1, index2, index3, index4. It should
return results from index1, index2, index3, index4 and merge the results by
score, datetime, or something else.
If just one master or one slave server fails, I think you can maybe use the master
index server.
A shell controlled by a program is easy for me; I use PHP and shell_exec.
2007/6/21, Otis Gospodnetic <[EMAIL PROTECTED]>:
Right, that SAN con 2 Masters sounds good. Lucky you with your lonely
Master! Wh
OK, I find it only happens on Windows.
2007/6/19, James liu <[EMAIL PROTECTED]>:
It seems strange when i refresh same url search.
time will change...sometime use *0.01021409034729 s, *sometime use *
0.0080091953277588 s.
*sometime use *0.024219989776611 .
It change too big.
*
Only i use
It seems strange when I refresh the same URL search.
The time changes... sometimes it takes 0.01021409034729 s, sometimes
0.0080091953277588 s,
sometimes 0.024219989776611 s.
It changes too much.
Only I use it, and there are few searches, so I think memory is not all used.
Why does the time change so much, and I thi
Thanks.
2007/6/17, Yonik Seeley <[EMAIL PROTECTED]>:
On 6/16/07, James liu <[EMAIL PROTECTED]> wrote:
> i wanna show keyword: a and facet sid: 2
>
> my url:
>
http://localhost:8080/solr1/select?q=a+sid:2&start=0&rows=10&fl=*&wt=json
>
> but it show m
For example,
I want to search for keyword: a and facet on sid: 2.
My URL:
http://localhost:8080/solr1/select?q=a+sid:2&start=0&rows=10&fl=*&wt=json
but it shows me a count bigger than the facet number.
I read http://lucene.apache.org/java/docs/queryparsersyntax.html
and tried several ways, none had any effect.
maybe some
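The thread's final fix is not shown here, but one common approach (my assumption, not the confirmed answer) is to keep the keyword in q and put the sid restriction in a filter query (fq), so the hit count and the facet count line up; host, core name, and fields below are taken from the URL above:

<?php
// Sketch (assumption): restrict to sid:2 with fq instead of appending it to q,
// and request facet counts on sid at the same time.
$params = array(
    'q'           => 'a',
    'fq'          => 'sid:2',
    'facet'       => 'true',
    'facet.field' => 'sid',
    'start'       => 0,
    'rows'        => 10,
    'fl'          => '*',
    'wt'          => 'json',
);
$url = 'http://localhost:8080/solr1/select?' . http_build_query($params);
$result = json_decode(file_get_contents($url), true);
echo $result['response']['numFound'] . " docs matched\n";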
Maybe you will find it in apache-solr-1.2.0\example\logs,
and I do not use Jetty.
2007/6/15, Jack L <[EMAIL PROTECTED]>:
Yeah, I'm running 1.1 with jetty.
But I didn't find *.log in the whole solr directory.
Is jetty putting the log files outside the directory?
> what version of solr/container a
If you use Jetty, you should look at Jetty's log.
If you use Tomcat, you should look at Tomcat's log.
Solr is just a program that runs inside a container.
2007/6/15, Ryan McKinley <[EMAIL PROTECTED]>:
what version of solr/container are you running?
this sounds similar to what people running solr 1.1 with the je
2007/6/14, Yonik Seeley <[EMAIL PROTECTED]>:
On 6/14/07, James liu <[EMAIL PROTECTED]> wrote:
> i write script to get run time to sure how to performance.
>
> i find very intresting thing that i query 2 solr box to get data and
solr
> response show me qtime all zero.
Is it OK?
2007/6/14, vanderkerkoff <[EMAIL PROTECTED]>:
Hi Yonik
Here's the output from netcat
POST /solr/update HTTP/1.1
Host: localhost:8983
Accept-Encoding: identity
Content-Length: 83
Content-Type: text/xml; charset=utf-8
that looks Ok to me, but I am a bit twp you see.
:-)
Yonik Seel
I wrote a script to measure the run time so I can gauge performance.
I find a very interesting thing: I query 2 Solr boxes to get data, and the Solr
responses show me a QTime of zero for both,
but I find the multi-get data script's elapsed time is 0.046674966812134 (it varies).
The Solr boxes are on my PC, and the index data is very small.
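A small sketch (mine, not from the mail) of why the two numbers differ: the script's wall-clock time includes HTTP and PHP overhead, while QTime only counts query execution inside Solr, so it can round down to 0 ms on a tiny index; the URL is an assumption:

<?php
// Sketch: compare wall-clock time around the request with Solr's reported
// QTime (milliseconds, query execution only).
$url = 'http://localhost:8080/solr/select?q=solr&rows=10&wt=json';

$start = microtime(true);
$result = json_decode(file_get_contents($url), true);
$elapsed = microtime(true) - $start;

printf("wall clock: %.6f s\n", $elapsed);
printf("Solr QTime: %d ms\n", $result['responseHeader']['QTime']);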
2007/6/7, Yonik Seeley <[EMAIL PROTECTED]>:
On 6/6/07, James liu <[EMAIL PROTECTED]> wrote:
> anyone agree?
No ;-)
At least not if you mean using map-reduce for queries.
When I started looking at distributed search, I immediately went and
read the map-reduce paper (easier
Does anyone agree?
What is the plan for Solr's next development cycle? Does anyone know?
--
regards
jl
thks,ryan, i find "required" in changes.txt
2007/6/4, Ryan McKinley < [EMAIL PROTECTED]>:
>
> i modifiy it and now start is ok
>
>>
>>
>
> property required means?
> i not find it in comment.
>
"required" means that the field *must* be specified when you add it to
the index. If it i
Solr 1.3-dev 2007-06-04 (svn)
The Tomcat log shows me this error information:
solr 1.3dev 2007-06-04
org.apache.solr.core.SolrException: Unknown fieldtype 'string'
I find it is only used in schema.xml.
I modified it and now startup is OK.
What does the property "required" mean?
I did not find it in the comments.
--
Thanks, Solr Committers
--
regards
jl
2007/5/29, Chris Hostetter <[EMAIL PROTECTED]>:
: > facet.analyzer is true, do analyze, if false don't analyze.
: What if Solr doesn't have access to the unindexed version? My
: suggestion would be to copyField into an unanalyzed version, and
: facet on that.
Me too.
Yeah, I'm not even su