hello,
I created an index with 1.5M docs. When I post a query without facets it
returns in a moment.
When I post a query with one facet it takes 14s.
[Solr XML response excerpt, markup stripped by the archive: status 0,
QTime 14263; the echoed params appear to include facet=true, q=zamok,
facet.limit=-1, facet.field=wasCreatedBy_fct, rows=10, version=2.2]
When I add a filter that returns only one doc, it takes the same time.
[Solr XML response excerpt, markup stripped by the archive: status 0,
QTime 13249]
How many terms are in the wasCreatedBy_fct field? How is that field
and its type configured?
Solr 1.3? Or trunk? Trunk contains massive faceting speed
improvements.
Erik
On Mar 17, 2009, at 4:21 AM, pcurila wrote:
hello,
I created index with 1.5m docs. When I am post quer
Peter,
If possible, try running a 1.4 snapshot of Solr; the faceting
improvements are quite remarkable.
However, if you can't run unreleased code, it might be an idea to try
reducing the number of unique terms (try indexing surnames only?).
Toby.
On 17 Mar 2009, at 10:01, pcurila wrote:
I am using 1.3
> How many terms are in the wasCreatedBy_fct field? How is that field
> and its type configured?
The field contains author names, and there are lots of them.
Here is the type configuration:
--
View this message in context:
http://www.nabble.com/why-is-query-so-slow-tp22
Hi,
I am implementing lemmatisation in Solr, which means if a user looks for
"Mouse" then it should display results for both Mouse and Mice. I understand
this is a kind of context-aware search. I thought of using synonyms for this,
but then synonyms.txt will have so many records and this will keep o
Hi,
I am searching with a query string which contains special characters like
è in it. For example, if I search for tèst then it should return all the
results which contain tèst, test, etc. There are other special characters also.
I have updated the server.xml file of my Tomcat server and included U
Hi,
I have a query like this
content:the AND iuser_id:5
which means: return all docs of user id 5 which have the word "the" in
content. Since 'the' is a stop word, this query executes as just user_id:5
in spite of the "AND" clause, whereas the expected result here is, since there
is no result for
Victor,
I'd recommend looking at the tutorial at http://lucene.apache.org/solr/tutorial.html
and using the list for more specific questions. Also, there is a list
of companies (as well as mine!) that do support of Solr at http://wiki.apache.org/solr/Support
that eTrade can contract with to provi
Have you looked for any open source lemmatizers? I didn't find any in
a quick search, but there probably are some out there.
Also, is there a particular reason you are after lemmatization instead
of stemming? Maybe a "light" stemmer plus synonyms might suffice?
On Mar 17, 2009, at 6:02 AM
You will need to create a field that handles the accents in order to
do this. Start by looking at the ISOLatin1AccentFilter.
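For what it's worth, a minimal fieldType along those lines might look like
this (a sketch; the type name "textAccents" is made up, the factories are
the stock Solr classes):

```xml
<!-- hypothetical type name; ISOLatin1AccentFilterFactory folds è -> e
     at both index and query time, so "tèst" and "test" match each other -->
<fieldType name="textAccents" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.ISOLatin1AccentFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Remember that any field using the type has to be reindexed after the change.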
-Grant
On Mar 17, 2009, at 7:31 AM, dabboo wrote:
Hi,
I am searching with any query string, which contains special
characters like
è in it. for e.g. If I search f
Stemming and synonyms are working fine in the application, but they are
working individually. I guess I will need to add the values in synonyms.txt
to achieve it. Am I right?
Actually it's a project requirement to implement lemmatisation. I also
looked out for lemmatisation but couldn't get a
I have added this filter factory in my schema.xml also, but it is still not
working. I am sorry, but I didn't get how to create the field to handle the
accents.
Please help.
Grant Ingersoll-6 wrote:
>
> You will need to create a field that handles the accents in order to
> do this. Start
This is the entry in schema.xml
dabboo wrote
Well, by definition, using an analyzer that removes stopwords
*should* do this at query time. This assumes that you used
an analyzer that removed stopwords at index and query time.
The stopwords are not in the index.
You can get the behavior you expect by using an analyzer at
query time that does
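As a sketch of what that looks like in schema.xml (assuming the stock
stopwords.txt), with stopwords removed at index time but left alone at
query time:

```xml
<fieldType name="text_stop" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- stopwords never make it into the index -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
  <analyzer type="query">
    <!-- no stop filter: a query term like "the" is kept, matches nothing,
         and an AND query containing it therefore returns no results -->
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```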
Did you reindex after you incorporated the ISOLatin... filter?
On Tue, Mar 17, 2009 at 8:40 AM, dabboo wrote:
>
> This is the entry in schema.xml
>
> [quoted schema.xml field definition, markup stripped by the archive; it
> showed omitNorms="true" and a stop filter with ignoreCase="true" and
> words="stopwords" — cut off here]
I have the same question in mind. How can I configure the same standard request
handler to handle the spell check for given query?
I mean instead of calling
http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=globl for
spell checking, the following query request
should take care of
How can I configure the same standard request handler to handle the spell check
for given query? I mean instead of calling
http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=elepents for
spell checking, the following query request
should take care of both querying and spell checkin
Hi all,
I'd like to achieve the following:
When searching for e.g. two words, one of them being spelt correctly
the other one misspelt I'd like to receive results for the correct
word but would still like to get spelling suggestions for the wrong
word.
Currently when I search for misspel
On 17.03.2009, at 14:39, Shyamsunder Reddy wrote:
I have the same question in mind. How can I configure the same
standard request handler to handle the spell check for given query?
I mean instead of calling http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=globl
for spelling che
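One common approach (a sketch against the stock Solr example config; the
component name "spellcheck" is the example's) is to attach the
SpellCheckComponent to the standard handler via last-components, so one
request does both querying and spell checking:

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="spellcheck">true</str>
  </lst>
  <!-- runs after the query component on every request to this handler -->
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

A request like /select?q=globl should then return both search results and
suggestions; if spellcheck.q is not given, the component falls back to q.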
Yes, I did and below is my debugQuery result.
[debugQuery XML response excerpt, markup stripped by the archive: status 0,
QTime 47, rows 10, q=Colo� (the accented character is mangled in the
archive), qt=dismaxrequest, debugQuery=true, version 2.2]
+DisjunctionMaxQuery((programJacketImage_program_s:colo |
courseCodeSeq_course_s:colo | authorLastName_product_s:colo |
era_product_s:colo
Hello all,
I have a table TEST in an Oracle DB with the following columns: URI
(varchar), CONTENT (varchar), CREATION_TIME (date).
The primary key both in the DB and Solr is URI.
Here is my data-config.xml:
The problem is that anytime I perform a delta-import
Hi
How can I commit without optimize?
Because I've got this: > start
commit(optimize=true,waitFlush=false,waitSearcher=true)
but I don't want to optimize; otherwise my replication will copy the full
index folder every time.
Thanks a lot, guys, for your help,
ryantxu wrote:
>
> yes. optimize also
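In case it helps: with the XML update handler, commit and optimize are
separate messages, so posting a bare commit never optimizes (a sketch;
adjust the flags to taste):

```xml
<!-- POST to /update: flushes pending docs and reopens the searcher,
     without merging the index down to one segment -->
<commit waitFlush="false" waitSearcher="true"/>

<!-- optimize only happens if you explicitly send this instead -->
<!-- <optimize/> -->
```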
I think if you use spellcheck.collate=true, you will still receive the
results for correct word and suggestion for wrong word.
I have a name field (which is first name + last name) configured for spell
check. I have a name entry: GUY SHUMAKER. I am trying to find out person
names where either 'GUY' or
Yonik Seeley wrote:
Not sure... I just took the stock solr example, and it worked fine.
I inserted "o'meara" into example/exampledocs/solr.xml
Advanced o'meara Full-Text Search
Capabilities using Lucene
then indexed everything: ./post.sh *.xml
Then queried in various ways:
q=o'meara
q=omeara
Thanks Mark, that really did the job! The speed loss in update time is more
than compensated at optimizing time!
Now I am trying to do another test... but I am not sure if Lucene has this
option; I am using Lucene 2.9-dev.
As I am working with 3G index and always have to optimize (as I said before,
: here is the whole file, if it helps
as i said before, i don't know much about the inner workings of
distributed search, but nothing about your config seems odd to me. it
seems like it should work fine.
a wild shot in the dark: instead of using a requestHandler named
"standard" and urls
Hello,
I have a couple of questions relating to replication in Solr. As far as
I understand it, the replication approach for both 1.3 and 1.4 involves
having the slaves poll the master for updates to the index. We're
curious to know if it's possible to have a more dynamic/quicker way to
propa
My advanced search option allows users to search three different fields at
the same time.
The fields are - first name, last name and org name. Now I have to add spell
checking feature for the fields.
When wrong spelling is entered for each of these words like first name: jahn,
last name: smath, or
Hello,
I am trying to create a basic single-core embedded Solr instance. I
figured out how to setup a single core instance and got (I believe)
all files in right places. However, I am unable to run trivial code
without exception:
SolrServer solr = new EmbeddedSolrServer(
: I have two cores in different machines which are referring to the same data
directory.
this isn't really considered a supported configuration ... both solr
instances are going to try and "own" the directory for updating, and
unless you do something special to ensure only one has control you
: I'm trying to think of a way to use both relevancy and date sorting in
: the same search. If documents are recent (say published within the last
: 2 years), I want to use all of the boost functions, BQ parameters, and
: normal Lucene scoring functions, but for documents older than two years,
: I
You haven't really given us a lot of information to work with...
what shows up in your logs?
what did you name the context fragment file?
where did you put the context fragment file?
where did you put the multicore directory?
sharing the *exact* directory listings and the *exact* commands you've
exe
This is a "feature" of the ShowFileRequestHandler -- it doesn't let people
browse files outside of the conf directory.
I suppose this behavior could be made configurable (right now the only
config option is "hidden" for excluding specific files ... we could have
an option to "allow" files that
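The existing "hidden" option looks roughly like this in solrconfig.xml (a
sketch; the file names are just examples):

```xml
<!-- /admin/file serves files from the conf directory; each "hidden"
     entry names a conf file that must not be exposed -->
<requestHandler name="/admin/file"
                class="org.apache.solr.handler.admin.ShowFileRequestHandler">
  <lst name="invariants">
    <str name="hidden">synonyms.txt</str>
    <str name="hidden">stopwords.txt</str>
  </lst>
</requestHandler>
```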
I've recently upgraded to Solr 1.3 using Lucene 2.4. One of the reasons I
upgraded was because of the nicer SearchComponent architecture that let me
add a needed feature to the default request handler. Simply put, I needed to
filter a query based on some additional parameters. So I subclassed
Query
below is my setup,
then under /home/zhangyongjiang/applications/solr, I have solr.xml as below,
under /home/zhangyongjiang/applications/solr, I created core1/, core2/, core3/,
core4 subdirectories.
hope it helps.
- Original Message
From: Chris Hostette
: Can someone point to or provide an example of how to incorporate a
: custom hitcollector when using Solr?
this is somewhat hard to do in non trivial ways because you would need to
by-pass a lot of the Solr code that builds up DocLists and DocSets ...
if however you don't need either of thos
: How come if bq doesn't influence what matches -- that's q -- bq only
: influence
: the scores of existing matches if they also match the bq
because that's the way it was designed ... "bq" is "boost query" it's
designed to boost the scores of documents that match the "q" param.
: when I put
: bq works only with q.alt query and not with q queries. So, in your case you
: would be using qf parameter for field boosting, you will have to give both
: the fields in qf parameter i.e. both title and media.
FWIW: that statement is false. the "boost query" (bq) is added to the
query regardle
: Is not particularly helpful. I tried adding adding a bq argument to my
: search:
:
: &bq=media:DVD^2
:
: (yes, this is an index of films!) but I find when I start adding more
: and more:
:
: &bq=media:DVD^2&bq=media:BLU-RAY^1.5
:
: I find the negative results - e.g. films that are DVD but a
FWIW: there has been a lot of discussion around how wildcards should work
in various params that involve field names in the past: search the
archives for "glob" or "globbing" and you'll find several.
: That makes sense, since hl.fl probably can get away with calculating in the
: writer, and not
: My original assumption for the DisMax Handler was, that it will just take the
: original query string and pass it to every field in its fieldlist using the
: fields configured analyzer stack. Maybe in the end add some stuff for the
: special options and so ... and then send the query to lucene.
: below is my setup,
: [quoted configuration, markup stripped by the archive]
you provided that information before, but you still haven't answered
most of the questions i asked you...
: You haven't really given us a lot of information to work with...
:
: what shows up in your logs?
: what did you name the context fragment file?
: I'm using StandardRequestHandler and I wanted to filter results by two fields
: in order to avoid duplicate results (in this case the documents are very
: similar, with differences in fields that are not returned in a query
: response).
...
: I managed to do the filtering in the client,
: I have an index which we are setting the default operator to AND.
: Am I right in saying that using the dismax handler, the default operator in
: the schema file is effectively ignored? (This is the conclusion I've made
: from testing myself)
correct.
: The issue I have with this, is that if I
: My problem was that the XMLResponseWriter is using the searcher of the
: original request to get the matching documents (in the method writeDocList
: of the class XMLWriter). Since the DocList contains ids from the index of the
: second core, they were not valid in the index of the core receivin
: I am using Apache POI parser to parse a Word Doc and extract the text
: content. Then i am passing the text content to SOLR. The Word document has
: many pictures, graphs and tables. But when i am passing the content to SOLR,
: it fails. Here is the exception trace.
:
: 09:31:04,516 ERROR [STDE
I'm using dismax with the default operator set to AND, and don't use
Minimum Match (commented out in solrconfig.xml), meaning 100% of the
terms must match. Then in my application logic I use a regex that
checks if the query contains " OR ", and if it does I add &mm=1 to the
solr request to effecti
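The regex trick described above can be sketched like this (a hypothetical
helper; only the " OR " check and the mm override come from the message):

```python
def solr_params(user_query, base_params=None):
    """Build Solr request params for a dismax handler that defaults to
    AND semantics (mm left unset means 100% of terms must match).
    If the user explicitly typed " OR ", relax to mm=1 so any one
    term may match."""
    params = dict(base_params or {})
    params["q"] = user_query
    if " OR " in user_query:
        params["mm"] = "1"
    return params
```

The application then serializes these params onto the request URL as usual.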
: Can I set the phrase slop value to standard request handler? I want it
: to be configurable in solrconfig.xml file.
if you mean when a user enters a query like...
+fieldA:"some phrase" +(fieldB:true fieldC:1234)
..you want to be able to control what slop value gets used for "some
phr
On Wed, Mar 18, 2009 at 12:34 AM, Vauthrin, Laurent
wrote:
> Hello,
>
>
>
> I have a couple of questions relating to replication in Solr. As far as
> I understand it, the replication approach for both 1.3 and 1.4 involves
> having the slaves poll the master for updates to the index. We're
> curi
Created SOLR-1073 in JIRA with the class file:
https://issues.apache.org/jira/browse/SOLR-1073
-- Original Message --
From: Chris Hostetter
To: solr-user@lucene.apache.org
Subject: Re: CJKAnalyzer and Chinese Text sort
Date: Mon, 16 Mar 2009 21:34:09 -0700 (PDT)
: Thanks Hoss fo
are you sure your schema.xml has a field to UPDATE docs.
to remove deleted docs you must have a deletedPkQuery attribute in the root entity
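A sketch of what that entity might look like for the TEST table above
(deltaQuery and deletedPkQuery are the real DataImportHandler attributes;
the TEST_DELETES audit table and its DELETE_TIME column are made up for
illustration):

```xml
<entity name="test" pk="URI"
        query="select URI, CONTENT, CREATION_TIME from TEST"
        deltaQuery="select URI from TEST
                    where CREATION_TIME &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="select URI from TEST_DELETES
                        where DELETE_TIME &gt; '${dataimporter.last_index_time}'">
  <field column="URI" name="URI"/>
  <field column="CONTENT" name="CONTENT"/>
</entity>
```

Without deletedPkQuery, a delta-import can only add and update docs; it has
no way of knowing which rows disappeared from the source table.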
On Tue, Mar 17, 2009 at 8:48 PM, Giovanni De Stefano
wrote:
> Hello all,
>
> I have a table TEST in an Oracle DB with the following columns: URI
> (varchar),
Hi,
I am a new user of Solr and I don't know how to index.
Can anyone tell me the settings so that I can build an index and search,
and also how to crawl any web site and the local file system using Solr?
Thanks In advance.
-Sanjshra
On Wed, Mar 18, 2009 at 11:42 AM, Gosavi.Shyam wrote:
>
> Hi,
> I am a new user of Solr and I don't know how to index.
> Can anyone tell me the settings so that I can build an index and search,
> and also how to crawl any web site and the local file system using Solr?
>
I think it will be best to start with the Solr