I suspect nobody wants to broach this topic; it has to have come up before,
but I cannot find an authoritative answer. How does the Standard Query Parser
evaluate boolean expressions? I have three fields: content, status, and
source_name. The expression
content:bement AND status:relevant
yie
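For anyone else hitting this: a minimal SolrJ sketch (hedged — the host, core name, and field names are illustrative; it assumes a 6.x-or-later SolrJ client) that runs the expression with debugQuery=true so you can see how the Standard Query Parser turns AND into MUST clauses:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DebugBooleanQuery {
  public static void main(String[] args) throws Exception {
    // Hypothetical URL/core; substitute your own.
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    SolrQuery q = new SolrQuery("content:bement AND status:relevant");
    q.set("debugQuery", "true");         // ask Solr to return parser/scoring debug info
    QueryResponse rsp = client.query(q);
    // "parsedquery" shows the Lucene BooleanQuery (e.g. +content:bement +status:relevant),
    // which is how AND is actually evaluated: both clauses become MUST.
    System.out.println(rsp.getDebugMap().get("parsedquery"));
    System.out.println("hits: " + rsp.getResults().getNumFound());
    client.close();
  }
}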
here is one) and/or 8.0. You may be
>> able to use the patch there to see if there are gaps or bugs that could be
>> fixed before 7.7 / 8.0.
>>
>> Jason, who did the work on that issue, also presented on SolrJ at the
>> Activate conference, you may find it interestin
Hi Shawn, thanks for the prompt reply!
> On Nov 29, 2018, at 4:55 PM, Shawn Heisey wrote:
>
> On 11/29/2018 2:01 PM, Thomas L. Redman wrote:
>> Hi! I am wanting to do nested facets/Grouping/Expand-Collapse using SolrJ,
>> and I can find no API for that. I see I can add a
Hi! I am wanting to do nested facets/Grouping/Expand-Collapse using SolrJ, and
I can find no API for that. I see I can add a pivot field, I guess to a query
in general, but that doesn’t seem to work at all; I get an NPE. The
documentation on SolrJ is sorely lacking; the documentation I have foun
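In case it helps later readers: SolrJ can request pivot (nested) facets by setting the plain facet parameters on a SolrQuery. A small hedged sketch — the field names are invented, and the exact helper methods vary somewhat across SolrJ versions:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.PivotField;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;
import java.util.List;

public class PivotFacetExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);                               // facets only, no documents
    q.setFacet(true);
    q.set("facet.pivot", "source_name,status"); // nested: status counts within each source_name
    QueryResponse rsp = client.query(q);
    NamedList<List<PivotField>> pivots = rsp.getFacetPivot();
    for (PivotField top : pivots.get("source_name,status")) {
      System.out.println(top.getValue() + " (" + top.getCount() + ")");
      if (top.getPivot() != null) {
        for (PivotField child : top.getPivot()) {
          System.out.println("  " + child.getValue() + " (" + child.getCount() + ")");
        }
      }
    }
    client.close();
  }
}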
Additionally, it looks like the commits are public on GitHub. Is this
backported to 5.5.x too? Users that are still on 5.x might want to backport
some of the issues themselves since it is not officially supported anymore.
On Mon, Oct 16, 2017 at 10:11 AM Mike Drob wrote:
> Given that the already pub
if i'm not mistaken?
-Stefan
On Sep 27, 2017 8:20 PM, "Wayne L. Johnson"
wrote:
> I’m testing Solr 7.0.0. When I start with an emp
I'm testing Solr 7.0.0. When I start with an empty index, Solr comes up just
fine, I can add documents and query documents. However when I start with an
already-populated set of documents (from 6.5.0), Solr will not start. The
relevant portion of the traceback seems to be:
Caused by: java.la
Can someone point me to a tutorial or blog on setting up SolrCloud on multiple
hosts? LucidWorks just has a trivial single-host example. I searched
around but only found some blogs for older versions (2014 or earlier).
thanks.
Hi all,
Hoping someone else uses the maven capabilities and can help out here.
Solr: 4.10.4
Ant-Task: ant generate-maven-artifacts
Problem:
When trying to publish to an internal artifactory using our SNAPSHOTs,
where our user has update/delete permissions, everything builds ok.
When trying to bu
Caches are stored on the Java heap for each instance of a searcher. The
filter cache would be different per replica; the same goes for the doc cache and
query cache.
On Fri, Feb 5, 2016 at 8:47 AM Tom Evans wrote:
> I have a small question about fq in cloud mode that I couldn't find an
> explanation for
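A small illustration of the point above (hedged; the collection, field names, and ZooKeeper address are invented, and the builder shown is the 7.x-era SolrJ one): the fq below is what ends up in a replica's filterCache, and because each replica has its own searcher on its own JVM heap, the same fq is cached independently on every replica that serves it.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import java.util.Collections;
import java.util.Optional;

public class FilterCacheNote {
  public static void main(String[] args) throws Exception {
    // CloudSolrClient routes requests to whichever replica serves the shard.
    CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("zk1:2181"), Optional.empty()).build();
    client.setDefaultCollection("mycollection");
    SolrQuery q = new SolrQuery("content:solr");
    // The fq result (a bitset over the replica's own index view) is cached in that
    // replica's filterCache; a different replica builds and caches its own copy.
    q.addFilterQuery("status:relevant");
    System.out.println(client.query(q).getResults().getNumFound());
    client.close();
  }
}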
To add to Erick's point:
It's also highly dependent on the types of queries you expect (sorting,
faceting, fq, q, size of documents) and how many concurrent updates you
expect. If most queries are going to be similar and you are not going to be
updating very often, you can expect most of your index
Using:
- JDK 1.8u40
- UseG1GC, ParallelRefProcEnabled, Xmx12g, Xms12g
- Solr 4.10.4
When using G1GC we are seeing very high processing times in the GC Remark
phase during reference processing. Originally we saw high times during
WeakReference processing, but adding the "-XX:+ParallelRefProcEnabled" flag
categories and can maintain the
hierarchy..
I'll take a look at it.
Thanks!
From: Erick Erickson
To: solr-user@lucene.apache.org; Mike L.
Sent: Monday, July 6, 2015 12:42 PM
Subject: Re: Category Hierarchy on Dynamic Fields - Solr 4.10
Hmmm, probably missing something her
Solr User Group -
Was wondering if anybody had any suggestions/best practices around a
requirement for storing a dynamic category structure that we need to be able to
facet on while maintaining its hierarchy.
Some context:
A product could belong to an undetermined number of product categor
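One common approach, offered only as a sketch (field and category names below are invented): index each category as a set of depth-prefixed path tokens in a multi-valued field and drive the hierarchy with facet.prefix.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class HierarchicalCategoryFacets {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build();

    // Index: store every ancestor of the category path, prefixed with its depth,
    // in a multi-valued string field (e.g. category_path_ss).
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "sku-1");
    doc.addField("category_path_ss", "0/Electronics");
    doc.addField("category_path_ss", "1/Electronics/Phones");
    doc.addField("category_path_ss", "2/Electronics/Phones/Smartphones");
    client.add(doc);
    client.commit();

    // Query: facet on the field, constraining to one level/branch with facet.prefix.
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    q.setFacet(true);
    q.addFacetField("category_path_ss");
    q.setFacetPrefix("1/Electronics/");   // children of "Electronics"
    client.query(q).getFacetField("category_path_ss").getValues()
        .forEach(c -> System.out.println(c.getName() + " (" + c.getCount() + ")"));
    client.close();
  }
}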
Are there any known network issues?
> * Do you have any idea about the GC on those replicas?
>
>
> On Mon, Apr 27, 2015 at 1:25 PM, Amit L wrote:
>
> > Hi,
> >
> > A few days ago I deployed a solr 4.9.0 cluster, which consists of 2
> > collections. Each collection
Hi,
A few days ago I deployed a solr 4.9.0 cluster, which consists of 2
collections. Each collection has 1 shard with 3 replicas on 3 different
machines.
On the first day I noticed this error appear on the leader. Full Log -
http://pastebin.com/wcPMZb0s
4/23/2015, 2:34:37 PM SEVERE SolrCmdDist
Thanks Jack. I'll give that a whirl.
From: Jack Krupansky
To: solr-user@lucene.apache.org; Mike L.
Sent: Saturday, April 11, 2015 12:04 PM
Subject: Re: Bq Question - Solr 4.10
It all depends on what you want your scores to look like. Or do you care at all
what the scores
Hello -
I have qf boosting setup and that works well and balanced across different
fields.
However, I have a requirement that if a particular manufacturer is part of the
returned matched documents (say the top 20 results), all those matched docs from
that manufacturer should be bumped to the
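A hedged SolrJ sketch of what a bq-based bump might look like (edismax, with an invented manufacturer field/value; the boost factor is something to tune, and as Jack says it depends on what you want the scores to look like):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BoostManufacturer {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build();
    SolrQuery q = new SolrQuery("drill press");
    q.set("defType", "edismax");
    q.set("qf", "name^3 description");          // the existing, balanced qf boosting
    // bq adds score to matching docs without filtering anything out, so documents
    // from this manufacturer float toward the top of the same result set.
    q.set("bq", "manufacturer:\"Acme Tools\"^10");
    q.setRows(20);
    client.query(q).getResults()
        .forEach(d -> System.out.println(d.getFieldValue("id") + " " + d.getFieldValue("manufacturer")));
    client.close();
  }
}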
Typo: *even when the user delimits with a space. (e.g. base ball should find
baseball).
Thanks,
From: Mike L.
To: "solr-user@lucene.apache.org"
Sent: Tuesday, April 7, 2015 9:05 AM
Subject: DictionaryCompoundWordTokenFilterFactory - Dictionary/Compound-Words
File
Solr User Group -
I have a case where I need to be able to search against compound words, even
when the user delimits with a space. (e.g. baseball => base ball). I think
I've solved this by creating a compound-words dictionary file containing the
split words that I would want DictionaryCom
From: Jack Krupansky
To: solr-user@lucene.apache.org; Mike L.
Sent: Sunday, April 5, 2015 8:23 AM
Subject: Re: WordDelimiterFilterFactory - tokenizer question
You have to tell the filter what types of tokens to generate - words, numbers.
You told it to generate... nothing. You did te
Solr User Group,
I have a non-multivalued field which contains stored values similar to this:
US100AUS100BUS100CUS100-DUS100BBA
My assumption is: if I tokenized with the below fieldType definition,
specifically the WDF splitOnNumbers option and the LowerCaseFilterFactory, they
would have provided
Hi Dmitri,
I do have a question mark in my search. I see that I dropped that
accidentally when I was copying/pasting/formatting the details.
My curl command is curl "http://myserver/myapp/myproduct?fl=*,.";
And, it works fine whether I have .../myproduct/?fl=*, or if I leave out
the / b
It was pilot error. I just reviewed my servlet and noticed a parameter in
web.xml that was looking to find data for the new product in the production
index which doesn't have that data yet while my curl command was running
against the staging index. I rebuilt the servlet with the fixed parameter
an
I'm stumped. I've got some solrj 3.6.1 code that works fine against three of
my request handlers but not the fourth. The very odd thing is that I have no
trouble retrieving results with curl against all of the request handlers.
My solrj code sets some parameters:
ModifiableSolrParams param
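One thing worth double-checking in a case like this (a hedged sketch with invented handler and core names, shown against a modern SolrJ client): how the handler is addressed. curl hits the handler by path, while SolrJ only does so if you set the request handler (or qt) explicitly; otherwise requests go to the default /select.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.params.ModifiableSolrParams;

public class RequestHandlerAddressing {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

    // Option 1: SolrQuery with an explicit handler path (like curl .../mycore/myhandler?q=...).
    SolrQuery q = new SolrQuery("*:*");
    q.setRequestHandler("/myhandler");
    System.out.println(client.query(q).getResults().getNumFound());

    // Option 2: raw ModifiableSolrParams; without a handler path or "qt" this goes to /select,
    // which is a common reason one handler "works with curl but not with SolrJ".
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("q", "*:*");
    params.set("qt", "/myhandler");
    System.out.println(client.query(params).getResults().getNumFound());
    client.close();
  }
}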
Hi,
I want to add the Solr source to Eclipse following the instructions at this link:
http://wise.99acres.com/tutorials/tutorial-solr-code-run-from-eclipse-wtp-tomcat/,
but cannot succeed (it fails at step 7).
This is the error message:
C:\Users\anhletung\Downloads\solr-4.10.0-src\solr-4.10.0\build.xml:111: The
following error
://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 5 December 2014 at 14:07, Min L wrote:
> > Hi all:
> >
> > My code using solr spellchecker to suggest keywords worked fine locally,
> > howeve
G.info("Stored suggest data to: " + target.getAbsolutePath());
}
}
On Fri, Dec 5, 2014 at 12:59 PM, Erick Erickson
wrote:
> What's the rest of the stack trace? There should
> be a root cause somewhere.
>
> Best,
> Erick
>
> On Fri, Dec 5, 2014 at 11:07 AM
Hi all:
My code using the Solr spellchecker to suggest keywords worked fine locally;
however, in the QA Solr environment it failed to build, with the following error
in the Solr log:
ERROR Suggester Store Lookup build from index on field: myfieldname failed
reader has: xxx docs
I checked the solr directory and th
Hi all:
Has anyone made Solr MoreLikeThis work with result grouping?
Thanks in advance.
M
I was using the SOLR administrative interface to issue my queries. When I
bypass the administrative interface and go directly to Solr, the JSON response
indicates the AID is as it should be. The issue is in the presentation layer of
the Solr Admin UI. Which is good news.
Thanks all, my bad. Shoul
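For the record, the symptom above is consistent with the value being squeezed through a double-precision float somewhere in the presentation layer (JavaScript numbers are doubles). A tiny Java illustration of the same effect — purely illustrative, not the Admin UI's actual code:

public class LongPrecision {
  public static void main(String[] args) {
    long indexed = 9007199254740993L;       // 2^53 + 1, a "very large number"
    double presented = (double) indexed;    // what a JS/double-based UI effectively does
    long roundTripped = (long) presented;
    // Prints 9007199254740993 vs 9007199254740992: the stored/indexed value is fine,
    // only the displayed value goes wrong once it exceeds 2^53.
    System.out.println(indexed + " vs " + roundTripped);
  }
}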
I believe I have encountered a bug in SOLR. I have a data type defined as
follows:
I have not been able to reproduce this problem for smaller numbers, but for
some of the very large numbers, the value that gets stored for this “aid” field
is not the same as the number that gets indexed. For e
ing this issue
>
> On Wed, Nov 5, 2014 at 4:51 PM, Alan Woodward wrote:
>
> > Hi Min,
> >
> > Do you have the specific bit of text that caused this exception to be
> > thrown?
> >
> > Alan Woodward
> > www.flax.co.uk
> >
> >
> > O
Hi All:
I am using Solr 4.9.1 and trying to use the PostingsSolrHighlighter. But I got
errors during indexing. I thought LUCENE-5111 had fixed issues with
WordDelimiterFilter. The error is as below:
Caused by: java.lang.IllegalArgumentException: startOffset must be
non-negative, and endOffset must b
Appreciate all the support and I'll give it a whirl. Cheers!
Sent from my iPhone
> On Feb 8, 2014, at 4:25 PM, Shawn Heisey wrote:
>
>> On 2/8/2014 12:12 PM, Mike L. wrote:
>> I'm going to try loading all 3000 fields in the schema and see how that goes.
>>
> fielda_value, fieldb_value into a single field. Then do the right thing
> when searching. Watch tokenization though.
>
> Best
> Erick
>> On Feb 5, 2014 4:59 AM, "Mike L." wrote:
>>
>>
>> Thanks Shawn. This is good to know.
>>
>>
>>
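A small sketch of Erick's suggestion above (field names invented; "watch tokenization" means picking an analyzer that keeps the name_value tokens searchable the way you need):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CollapseManyFields {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    // Instead of thousands of concrete fields, keep one multi-valued "attrs" field
    // whose tokens carry the original field name, so the granularity is preserved.
    doc.addField("attrs", "fielda_red");
    doc.addField("attrs", "fieldb_large");
    doc.addField("attrs", "fieldc_instock");
    client.add(doc);
    client.commit();
    // Knowing "which field had which value" then becomes q=attrs:fielda_red.
    client.close();
  }
}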
Thanks Shawn. This is good to know.
Sent from my iPhone
> On Feb 5, 2014, at 12:53 AM, Shawn Heisey wrote:
>
>> On 2/4/2014 8:00 PM, Mike L. wrote:
>> I'm just wondering here if there is any defined limit to how many fields can
>> be created within a schem
al (95th
> percentile) query?
>
> -- Jack Krupansky
>
> -----Original Message- From: Mike L.
> Sent: Tuesday, February 4, 2014 10:00 PM
> To: solr-user@lucene.apache.org
> Subject: Max Limit to Schema Fields - Solr 4.X
>
>
> solr user group -
>
> I&
solr user group -
I'm afraid I may have a scenario where I might need to define a few
thousand fields in Solr. The context here is that this type of data is extremely
granular and unfortunately cannot be grouped into logical groupings or
aggregate fields because there is a need to know which
Mike L. wrote:
>
> Solr Admins,
>
> I've been using Solr for the last couple years and would like to
> contribute to this awesome project. Can I be added to the Contributors group,
> with access to update the Wiki as well?
>
> Thanks in advance.
>
> Mike L.
>
>
Solr Admins,
I've been using Solr for the last couple years and would like to
contribute to this awesome project. Can I be added to the Contributors group,
with access to update the Wiki as well?
Thanks in advance.
Mike L.
Nevermind, I figured it out. Excel was applying a hidden quote on the data.
Thanks anyway.
From: Mike L.
To: "solr-user@lucene.apache.org"
Sent: Wednesday, September 25, 2013 11:32 AM
Subject: Solr 4.4 Import from CSV to Multi-value field - Adds quote on last
value
S
Solr Family,
I'm a Solr 3.6 user who just pulled down 4.4 yesterday and noticed
something a bit odd when importing into a multi-valued field. I wouldn't be
surprised if there's a user-error on my end but hopefully there isn't a bug.
Here's the situation.
I created some test data to
Jack Krupansky-2 wrote
> Also, be aware that the spaces in your query need to be URL-encoded.
> Depending on how you are sending the command, you may have to do that
> encoding yourself.
>
> -- Jack Krupansky
It's a good possibility that that's the problem. I've been doing queries in
different
Erick Erickson wrote
> What do you get when you add &debugQuery=true? That should show you the
> results of the query parsing, which often adds clues.
>
> FWIW,
> Erick
When I was trying to debug this last night I noticed that when I added
"&debugQuery=true" to queries I would only get debug outp
Jack Krupansky-2 wrote
> What query parser and release of Solr are you using?
>
> There was a bug at one point where a fielded term immediately after a left
> parenthesis was not handled properly.
>
> If I recall, just insert a space after the left parenthesis.
>
> Also, the dismax query parser
When I do this query:
q=catcode:CC001
I get a bunch of results. One of them looks like this:
CC001
Cooper, John
If I then do this query:
q=start_url_title:cooper
I also match the record above, as expected.
But, if I do this:
q=(catcode:CC001 AND start_u
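If it is the old parser quirk Jack mentions, the workaround is literally a space after the opening parenthesis. A hedged SolrJ sketch (core name invented) that also turns on debugQuery so the parsed queries can be compared:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ParenWorkaround {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    // Note the space after "(" — the workaround for the fielded-term-after-parenthesis bug.
    SolrQuery q = new SolrQuery("( catcode:CC001 AND start_url_title:cooper)");
    q.set("debugQuery", "true");
    System.out.println(client.query(q).getDebugMap().get("parsedquery"));
    client.close();
  }
}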
Solr User Group,
I would like to return a hierarchical data relationship when somebody
queries for a parent doc in solr. This sort of relationship doesn't currently
exist in our core as the use-case has been to search for a specific document
only. However, here's kind of an example
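Depending on the Solr version in play, nested (parent/child) documents may already cover this: SolrJ lets you attach child documents and query them back with the block-join parsers. A hedged sketch using the Solr 4.5+ APIs, with invented field names:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ParentChildDocs {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "order-1");
    parent.addField("doc_type", "parent");

    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "order-1-line-1");
    child.addField("doc_type", "child");
    child.addField("sku", "ABC123");
    parent.addChildDocument(child);          // indexed as a block together with the parent

    client.add(parent);
    client.commit();

    // Block-join: find the parent docs whose children match.
    SolrQuery q = new SolrQuery("{!parent which=doc_type:parent}sku:ABC123");
    System.out.println(client.query(q).getResults().getNumFound());
    client.close();
  }
}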
requests on
different data files containing different data?
Thanks in advance. This was very helpful.
Mike
From: Shawn Heisey
To: solr-user@lucene.apache.org
Sent: Monday, July 1, 2013 2:30 PM
Subject: Re: FileDataSource vs JdbcDataSouce (speed) Solr 3.5
On 7/1/
s not doing what it's supposed to, or am I missing
something? I also tried passing a commit afterward like this:
http://server:port/appname/solrcore/update?stream.body=%3Ccommit/%3E (didn't
seem to do anything either)
From: Ahmet Arslan
To: "solr-user@lucene.apache.org" ; Mike L
I've been working on improving index time with a JdbcDataSource DIH based
config and found it not to be as performant as I'd hoped for, for various
reasons, not specifically due to solr. With that said, I decided to switch
gears a bit and test out FileDataSource setup... I assumed by eliminiat
have not completed the job quite yet with any config... I did get very close..
I'd hate to throw additional memory at the problem if there is something else I
can tweak..
Thanks!
Mike
From: Shawn Heisey
To: solr-user@lucene.apache.org
Sent: Wednesday, June 26, 2013 12:13
Hello,
I'm trying to execute a parallel DIH process and am running into heap-related
issues; hoping somebody has experienced this and can recommend some
options..
Using Solr 3.5 on CentOS.
Currently have JVM heap 4GB min, 8GB max
When executing the entities in a se
>
> Justin, can you tell us which field in the query is your record id? What is
> the record id's type in database and in solr schema? What is your unique
> key and its type in solr schema?
>
>
> On Tue, Mar 19, 2013 at 5:19 AM, Justin L. wrote:
>
> > Every time I
I'm just writing to close the loop on this issue.
I moved my servlet to a beefier server with lots of RAM. I also cleaned up
the data to make the index somewhat smaller. And, I turned off all the
caches since my application doesn't benefit very much from caching. My
application is now quite zippy,
p.s. Regarding streaming of the data, my Java servlet uses solrj and iterates
through the results. Right now I'm focused on getting rid of the delay that
causes some queries to take 6 or 8 seconds to complete, so I'm not even
looking at the performance of the streaming.
My virtual machine has 6GB of RAM. Tomcat is currently configured to use 4GB
of it. The size of the index is 5.4GB for 3 million records which averages
out to 1.8KB per record. I can look at trimming the data, having fewer
records in the index to make it smaller, or getting more memory for the VM.
I just did the experiment of retrieving only the metaDataUrl field. I still
sometimes get slow retrieval times. One query took 2.6 seconds of real time
to retrieve 80k of data. There were 500 results. QTime was 229. So, I do
need to track down where the extra 2+ seconds is going.
Thanks everyone for the responses.
I did some more queries and watched disk activity with iostat. Sure enough,
during some of the slow queries the disk was pegged at 100% (or more.)
The requirement for the app I'm building is to be able to retrieve 500
results in ideally one second. The index has
Sometimes when I use curl to query solr I get a slow real time response but a
short QTime.
Here's an example:
$ time curl "solrsandbox/testindex/select/?q=all:science,data&rows=500" >
foo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
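QTime only covers the search itself on the server; it excludes writing and transferring the (large) response and any client-side work. A hedged SolrJ sketch for measuring both sides of that gap (core and field names taken from the example above, otherwise illustrative):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QTimeVsWallTime {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/testindex").build();
    SolrQuery q = new SolrQuery("all:science,data");
    q.setRows(500);
    long start = System.nanoTime();
    QueryResponse rsp = client.query(q);
    long wallMs = (System.nanoTime() - start) / 1_000_000;
    // QTime is the server-side search time; the difference is response writing,
    // network transfer, and client parsing — where a "slow curl, fast QTime" gap lives.
    System.out.println("QTime=" + rsp.getQTime() + "ms, wall=" + wallMs
        + "ms, docs=" + rsp.getResults().size());
    client.close();
  }
}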
It is not just one document that would be returned; it is one document per
person. That is a little trickier.
- Original Message -
From: "Michael Sokolov"
To: solr-user@lucene.apache.org
Cc: "l blevins"
Sent: Wednesday, March 9, 2011 7:46:10 PM
Subject: Re:
- Forwarded Message -
From: "l blevins"
To: "solr user mail"
Sent: Wednesday, March 9, 2011 4:03:06 PM
Subject: some relational-type grouping with search
I have a large database for which we have some good search capabilities now, but
am interested to see if
Hi Ron,
In a nutshell - an indexed field is searchable, and a stored field has its
content stored in the index so it is retrievable. Here are some examples that
will hopefully give you a feel for how to set the indexed and stored options:
indexed="true" stored="true"
Use this for information yo
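A tiny sketch of the distinction above, under an assumed schema (field names invented): "body" is indexed but not stored, so it can be searched but never comes back in fl; "url" is stored but not indexed, so it comes back in results but can't be queried.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class IndexedVsStored {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    // Assumes a schema where "body" is indexed="true" stored="false"
    // and "url" is indexed="false" stored="true".
    SolrQuery q = new SolrQuery("body:solr");   // searchable because body is indexed
    q.setFields("id", "url");                   // retrievable because url is stored
    client.query(q).getResults()
        .forEach(d -> System.out.println(d.getFieldValue("id") + " -> " + d.getFieldValue("url")));
    // Asking for fl=body would return nothing for that field (not stored), and a
    // url:... query would match nothing (not indexed).
    client.close();
  }
}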
Shalin Shekhar Mangar wrote on 02/25/2010 07:38:39
AM:
> On Thu, Feb 25, 2010 at 5:34 PM, gunjan_versata
wrote:
>
> >
> > We are using SolrJ to handle commits to our solr server.. All runs
fine..
> > But whenever the commit happens, the server becomes slow and stops
> > responding.. thereby result
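One option that often helps in this situation (a hedged sketch using modern SolrJ signatures): let Solr fold commits together with commitWithin, or with autoCommit in solrconfig, instead of issuing an explicit commit from SolrJ after every batch.

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-42");
    doc.addField("title", "hello");
    // Ask Solr to make this visible within 30s; many adds share one commit,
    // instead of every client-side commit() stalling the server.
    client.add(doc, 30_000);
    client.close();
  }
}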
Otis Gospodnetic wrote on 01/22/2010 12:20:45
AM:
> I'm missing the bigger context of this thread here, but from the
> snippet below - sure, commits cause in-memory index to get written
> to disk, that causes some IO, and that *could* affect search *if*
> queries are running on the same box. Wh
ysee...@gmail.com wrote on 01/20/2010 02:24:04 PM:
> On Wed, Jan 20, 2010 at 2:18 PM, Jerome L Quinn
wrote:
> > This is essentially the same problem I'm fighting with. Once in a
while,
> > commit
> > causes everything to freeze, causing add commands to timeout.
>
ysee...@gmail.com wrote on 01/19/2010 06:05:45 PM:
> On Tue, Jan 19, 2010 at 5:57 PM, Steve Conover
wrote:
> > I'm using latest solr 1.4 with java 1.6 on linux. I have a 3M
> > document index that's 10+GB. We currently give solr 12GB of ram to
> > play in and our machine has 32GB total.
> >
> >
Lance Norskog wrote on 01/16/2010 12:43:09 AM:
> If your indexing software does not have the ability to retry after a
> failure, you might wish to change the timeout from 20 seconds to, say,
> 5 minutes.
I can make it retry, but I have somewhat real-time processes doing these
updates. Does an
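For the "somewhat real-time" constraint, a bounded retry with a short backoff is a middle ground between a five-minute timeout and giving up immediately. A generic hedged sketch, not tied to any particular SolrJ version:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BoundedRetryAdd {
  // Try a handful of times with a short, growing pause, then give up
  // so the near-real-time pipeline isn't blocked for minutes.
  static void addWithRetry(SolrClient client, SolrInputDocument doc) throws Exception {
    int attempts = 0;
    long backoffMs = 500;
    while (true) {
      try {
        client.add(doc);
        return;
      } catch (Exception e) {
        if (++attempts >= 4) throw e;   // bounded: surface the failure upstream
        Thread.sleep(backoffMs);
        backoffMs *= 2;                 // 0.5s, 1s, 2s — still well under a long timeout
      }
    }
  }
}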
Otis Gospodnetic wrote on 01/14/2010 10:07:15
PM:
> See those "waitFlush=true,waitSearcher=true" ? Do things improve if
> you make them false? (not sure how with autocommit without looking
> at the config and not sure if this makes a difference when
> autocommit triggers commits)
Looking at Dir
Hi, folks,
I am using Solr 1.3 pretty successfully, but am running into an issue that
hits once in a long while. I'm still using 1.3 since I have some custom
code I will have to port forward to 1.4.
My basic setup is that I have data sources continually pushing data into
Solr, around 20K adds
Hello,
Thanks. This would absolutely serve. I thought of doing it in the
query parser part, which I mentioned in the first mail. But if the query is
a complex one, it would become a bit complicated. That's why I
wanted to know whether there is any other way similar to the
second point
Hi,
I have explored the "DisMaxRequestHandler". It could serve for some
of my purposes but not all.
1) It seems we have to decide on that alternative field list beforehand
and declare it in the config.xml. But the field list for which
synonyms are to be considered is not definite (at least in the
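On point 1) above: the qf field list does not have to be frozen in the config; dismax (and later edismax) accept qf as a normal request parameter, so the alternative fields can be chosen per query. A hedged SolrJ sketch, reusing the city/place/town fields from the example elsewhere in this thread:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class PerRequestQf {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    SolrQuery q = new SolrQuery("paris");
    q.set("defType", "edismax");            // dismax behaves the same way for this purpose
    // qf is just a request parameter, so the "alternative field list" can vary per query
    // rather than being fixed in solrconfig.xml.
    q.set("qf", "city place town");
    System.out.println(client.query(q).getResults().getNumFound());
    client.close();
  }
}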
Hello All,
I have been trying to find out the right place to parse
the query submitted. To be brief, I need to expand the query. For
example.. let the query be
city:paris
then I would like to expand the query as follows:
city:paris OR place:paris OR town:paris.
I gue
Hello all,
I am trying to use the query parser plugin feature of Solr,
but it's really strange that every time it behaves in a different way.
I have declared my custom query parser in solrconfig.xml as follows...
I have linked it to the default request handler as follows..
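For reference, the shape of a query-parser plugin is small. Here is a minimal hedged sketch against a recent Solr API (on the 1.x/3.x releases of this era, parse() threw ParseException instead of SyntaxError and init(NamedList) had to be implemented explicitly); the rewrite inside parse() is purely illustrative.

import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import org.apache.solr.search.SyntaxError;

public class ExpandingQParserPlugin extends QParserPlugin {
  @Override
  public QParser createParser(String qstr, SolrParams localParams,
                              SolrParams params, SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      @Override
      public Query parse() throws SyntaxError {
        // Illustrative rewrite: expand city:X into (city:X OR place:X OR town:X),
        // then hand the result to the standard lucene parser.
        String expanded = qstr.replaceAll("city:(\\S+)", "(city:$1 OR place:$1 OR town:$1)");
        return QParser.getParser(expanded, "lucene", req).getQuery();
      }
    };
  }
}

It would then be registered in solrconfig.xml with a queryParser element and selected via defType or local params; whether that matches the behavior you're seeing depends on how the handler is wired up.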
Hello All,
I have been using apache-solr-common-1.3.0.jar in my module.
I am planning to shift to the latest version, because of course it has more
flexibility. But it is really strange that I don't find any corresponding jar
for the latest version. I have searched the entire apache sol
Howdy,
I recently rolled a custom WordNet synonym filter that pulls synonyms
from WordNet during indexing. All that is nice and dandy; however, it
causes problems in the sorting. Sometimes, the top match will come from
a synonym rather than the original word.
An example in our system is a se
Otis Gospodnetic wrote on 11/13/2009 11:15:43
PM:
> Let's take a step back. Why do you need to optimize? You said: "As
> long as I'm not optimizing, search and indexing times are
satisfactory." :)
>
> You don't need to optimize just because you are continuously adding
> and deleting documents
Lance Norskog wrote on 11/13/2009 11:18:42 PM:
> The 'maxSegments' feature is new with 1.4. I'm not sure that it will
> cause any less disk I/O during optimize.
It could still be useful to manage the "too many open files" problem that
rears its ugly head on occasion.
> The 'mergeFactor=2' id
ysee...@gmail.com wrote on 11/13/2009 09:06:29 AM:
> On Fri, Nov 13, 2009 at 6:27 AM, Michael McCandless
> wrote:
> > I think we sorely need a Directory impl that down-prioritizes IO
> > performed by merging.
>
> It's unclear if this case is caused by IO contention, or the OS cache
> of the hot p
Mark Miller wrote on 11/12/2009 07:18:03 PM:
> Ah, the pains of optimization. Its kind of just how it is. One solution
> is to use two boxes and replication - optimize on the master, and then
> queries only hit the slave. Out of reach for some though, and adds many
> complications.
Yes, in my us
Hi, everyone, this is a problem I've had for quite a while,
and have basically avoided optimizing because of it. However,
eventually we will get to the point where we must delete as
well as add docs continuously.
I have a Solr 1.3 index with ~4M docs at around 90G. This is a single
instance run
Mark Miller wrote on 01/26/2009 04:30:00 PM:
> Just a point or I missed: with such a large index (not doc size large,
> but content wise), I imagine a lot of your 16GB of RAM is being used by
> the system disk cache - which is good. Another reason you don't want to
> give too much RAM to the JV
"Lance Norskog" wrote on 01/20/2009 02:16:47 AM:
> "Lance Norskog"
> 01/20/2009 02:16 AM
> Java 1.5 has thread-locking bugs. Switching to Java 1.6 may cure this
> problem.
Thanks for taking time to look at the problem. Unfortunately, this is
happening on Java 1.6,
so I can't put the blame t
uspect I'll add a watchdog, no matter what's causing the problem here.
> However, you should figure out why you are running out of memory. You
> don't want to use more resources than you have available if you can help
it.
Definitely. That's on the agenda :-)
Thanks,
Julian Davchev wrote on 01/20/2009 10:07:48 AM:
> Julian Davchev
> 01/20/2009 10:07 AM
>
> I get SEVERE: Lock obtain timed out
>
> Hi,
> Any documents or something I can read on how locks work and how I can
> controll it. When do they occur etc.
> Cause only way I got out of this mess was rest
Hi, all.
I'm running solr 1.3 inside Tomcat 6.0.18. I'm running a modified query
parser, tokenizer, highlighter, and have a CustomScoreQuery for dates.
After some amount of time, I see solr stop responding to update requests.
When crawling through the logs, I see the following pattern:
Jan 12,
Hi, all. Are there any plans for putting together a bugfix release? I'm
not looking for particular bugs, but would like to know if bug fixes are
only going to be done mixed in with new features.
Thanks,
Jerry Quinn
(I think the
> defaults here are non-alphanumeric chars).
>
> Take a look at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
> for more info on tokenizers and filters.
>
> cheers,
> Aleks
>
> On Tue, 18 Nov 2008 08:35:31 +0100, Carsten L <[EMAIL PR
the query looks like:
"name:(carsten) OR name:(carsten*) OR email:(carsten) OR
email:(carsten*) OR userid:(carsten) OR userid:(carsten*)"
Then it should match:
carsten l
carsten larsen
Carsten Larsen
Carsten
CARSTEN
etc.
And when the user enters the term: "carsten l"
just rm -r SOLR_DIR/data/index.
2008/6/18 Mihails Agafonovs <[EMAIL PROTECTED]>:
> How can I clear the whole Solr index?
> Best regards, Mihails
--
regards
j.L
On Thu, May 15, 2008 at 11:25 PM, Walter Underwood <[EMAIL PROTECTED]>
wrote:
> I've worked with the Basis products. Solid, good support.
> Last time I talked to them, they were working on hooking
> them into Lucene.
>
I don't know the Basis product, but I know Google uses it, and in China,
google.cn.
Can you talk about it?
Maybe I will use Hadoop + Solr.
Thanks for your advice.
--
regards
j.L
I don't know the cost.
I know the bigger Chinese search engines use it.
Most Chinese people who study and use full-text search think it is the best
Chinese analyzer you can buy.
Baidu (www.baidu.com) is the biggest Chinese search engine, and Google China is
No. 2.
Baidu does not use it (http://www.hylanda.c
If you can read Chinese and want to write your own Chinese analyzer, maybe you
can see it: http://www.googlechinablog.com/2006/04/blog-post_10.html
2008/5/15 j. L <[EMAIL PROTECTED]>:
> For commercial analyzers, I recommend
> http://www.hylanda.com/ (it is
For commercial analyzers, I recommend http://www.hylanda.com/ (it is the best
analyzer for Chinese words).
On Thu, May 15, 2008 at 8:32 AM, j. L <[EMAIL PROTECTED]> wrote:
> You can try je-analyzer; I am building a 17M-doc search site with Solr and
> je-analyzer.
>
>
> On Thu, Ma
You can try je-analyzer; I am building a 17M-doc search site with Solr and
je-analyzer.
On Thu, May 15, 2008 at 6:44 AM, Walter Underwood <[EMAIL PROTECTED]>
wrote:
> N-gram works pretty well for Chinese, there are even studies to
> back that up.
>
> Do not use the N-gram matches for highlighting. They
2008/3/20 李银松 <[EMAIL PROTECTED]>:
> 1. When I set fl=score, Solr returns the same result as fl=*,score, not just scores.
> Is it a bug, or is it done on purpose?
You can set fl=id,score; Solr does not support a style like fl=score on its own.
> My customer wants to get the 1st-10010th added docs
> So I have to sort by t
Because Lucene 2.3.0 was released today.
--
regards
j.L