List,
I've stumbled upon an issue with the deduplication mechanism. It either
deletes all documents or does nothing at all, depending on whether
overwriteDupes is set to true or false.
I use a slightly modified configuration:
true
sig
true
content
o
It seems this e-mail did already leave the outbox yesterday. Apologies for the
spam.
On Tuesday 11 May 2010 10:13:18 Markus Jelsma wrote:
> List,
>
>
> I've stumbled upon an issue with the deduplication mechanism. It either
> deletes all documents or does nothing at all and it depends on the
>
Hello.
I'm searching for a way to merge my two different autocompletions into one
request.
That's what I want:
- suggestions for product names (EdgeNGram)
- suggestions for keywords (TermsComponent with Shingle)
Both work fine, but how can I merge them?
Is distributed/federated search the only and
Hi,
I have 1.8 million (18 lakh) records in my Solr index.
I am posting a query which matches all of these 1.8 million records.
This query is taking a long time, around 6-9 seconds.
How can I decrease this time? I may hit similar queries multiple
times, which will hurt performance.
Please help.
You really have to give some more details about *why* you issue such a query
and what you are measuring: search time? Total response time (which would
include network transmission)? *Of course* matching 1.8M records will take
some time, especially if you're trying to return the entire set of
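For what it's worth, if the whole match set really must be processed, paging through it is usually kinder to the server than one huge request. A minimal sketch of building such paged requests (host, handler, rows value, and field list are all illustrative assumptions, not from the original post):

```shell
# Build a paged Solr query URL instead of requesting all 1.8M rows at once.
base='http://localhost:8983/solr/select'
rows=100    # documents per page
start=0     # offset of this page; bump by $rows for the next request
url="${base}?q=*:*&rows=${rows}&start=${start}&fl=id,score"
echo "$url"
```

Each subsequent request increments start by rows; restricting fl to only the fields you actually need also cuts response size and transfer time.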
Thank you for the link, caman,
It is a great help.
Have a nice day
Regards
Eric
On Fri, May 7, 2010 at 5:13 PM, caman wrote:
>
> I would just look at SOLR source code and see how standard search handler
> and dismaxSearchHandler are implemented.
>
> Look under package 'org.apache.solr.
> <
> h
Great news, thanks :)
Marc
Jon,
Yes!!!
rsp.facet_counts.facet_fields.['var'].length to
rsp.facet_counts.facet_fields[var].length and voila.
Tripped up on a syntax error, how special. Just needed another set of
eyes - thanks. VelocityResponse duly noted, it will come in handy later.
- Tod
On 5/10/2010 4:55 PM, Jon
Hi Markus
Thank you for your answer
Here is a use case where I think it would be nice to know there is a dup before
I insert it.
Let's say I create a summary out of the document and I only index the summary
and store the document itself on a separate device (S3, Cassandra etc ...).
Then I wo
Hello.
I would like to write my own RequestHandler for my system. Is there a howto
somewhere on the web? I didn't find anything about it.
Can I develop in the svn checkout and test it without building a new
solr.war? And debug it?
I set up Solr in Eclipse like this:
http://www.lucidimagination.com/developers/articles/set
I've stored some geo data in SOLR, and some of the coordinates are negative
numbers. I'm having trouble getting a range to work.
Using the query tool in the admin interface, I can get something like:
lon:[* TO 0]
to work to list out everything with a negative longitude, but if I try to do
somet
That did not work
-Original Message-
From: Rivulet Enterprise Search [mailto:rivulet...@gmail.com]
Sent: Sunday, May 09, 2010 4:03 AM
To: solr-user@lucene.apache.org
Subject: Re: Please Help, how to Xinclude in schema.xml
try copy to /opt/tomcat/bin
--
View this message in context:
h
If you set overwriteDupes = false the exact or near duplicate documents will
not be deleted. The signature field is set, however, so you can later query
yourself for duplicates in an external program and do whatever you want with
the duplicates.
On Tuesday 11 May 2010 15:41:33 Matthieu Labour
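To illustrate the principle (this is a toy md5 sketch, not Solr's actual Signature classes): exact duplicates produce identical signature values, which is what makes the later grouping query possible.

```shell
# Two identical bodies of text hash to the same signature; a different
# body hashes to a different one.
sig1=$(printf '%s' 'the quick brown fox' | md5sum | cut -d' ' -f1)
sig2=$(printf '%s' 'the quick brown fox' | md5sum | cut -d' ' -f1)
sig3=$(printf '%s' 'a different document' | md5sum | cut -d' ' -f1)
echo "$sig1 $sig2 $sig3"
```

On the Solr side you could then facet on the signature field, e.g. facet=true&facet.field=sig&facet.mincount=2, to list the duplicate groups and handle them in an external program.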
1. You need to set the sig field to indexed.
2. This should be added to the wiki
3. Want to make a JIRA issue? This is not very friendly behavior (when
you have the sig field set to indexed=false and overwriteDupes=true it
should likely complain)
--
- Mark
http://www.lucidimagination.com
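A schema.xml declaration along the lines of point 1 might look like this (the type and stored settings are assumptions; only the field name sig comes from the thread):

```xml
<!-- the signature field must be indexed for overwriteDupes to find dupes -->
<field name="sig" type="string" indexed="true" stored="true"/>
```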
Hi,
is something like this possible?
http://127.0.0.1:8080/solr/suggest_keyword/terms/?terms.fl=tags&terms.prefix=aut&wt=xml&shards=localhost/suggestq=aut
We really need to see your schema definitions for the relevant field. For
instance,
if you're storing these as text you may just be losing the negative sign
which would
lead to all sorts of interesting "failures"..
Best
Erick
On Tue, May 11, 2010 at 9:53 AM, Christopher Gross wrote:
> I've store
The lines from the schema.xml:
I was going off of the examples from:
http://www.ibm.com/developerworks/opensource/library/j-spatial/index.html
but I wasn't able to use "tdouble" as they were.
Let me know if there is anything else you would need.
Thanks!
-- Chris
On Tue, May 1
I upload only 50 documents per call. We have about 200K documents to index,
and we index every night. Any suggestions on how to handle this? (I can
catch this exception and do a retry.)
On Mon, May 10, 2010 at 8:33 PM, Lance Norskog wrote:
> Yes, these occasionally happen with long indexing jobs
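The catch-and-retry idea could be sketched like this (the wrapper name and retry count are made up; substitute your actual posting command):

```shell
# Retry a command up to 3 times, pausing between attempts.
retry_post() {
  i=1
  while [ "$i" -le 3 ]; do
    if "$@"; then
      return 0          # success, stop retrying
    fi
    sleep 1             # brief pause before the next attempt
    i=$((i + 1))
  done
  return 1              # all attempts failed; let the caller handle it
}
retry_post true && echo "posted"
```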
pf has the same format as qf.
titlePhrase^10.0
For our project, if user query text matches the title of an article/book, we
want to surface that article as the first result. So, we are using pf field
to give additional boost to results that have the exact title match.
On Mon, May 10, 2010 at 11:21 PM
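In solrconfig.xml terms, a dismax setup like the one described might look like this; only titlePhrase^10.0 is from the post, while the handler name and qf fields are assumptions:

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">title^2.0 description</str>
    <!-- extra boost when the whole query matches the title as a phrase -->
    <str name="pf">titlePhrase^10.0</str>
  </lst>
</requestHandler>
```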
hi all,
I am very new to Solr.
Now I need to apply a patch to Solr (SOLR-236).
I downloaded the latest src code and the patch, but I am unable to find a
suitable way to apply it.
I have Eclipse installed.
Please guide me.
Hi,
How do I implement a requirement like "if category is xyz, the price should
be greater than 100 for inclusion in the result set".
In other words, the result set should contain:
- all matching documents with category value not xyz
- all matching documents with category value xyz and price > 100
I've run into this also while trying to index ~4 million documents
using CommonsHttpSolrServer.add(Iterator<SolrInputDocument> docIterator) to
stream all the documents. It worked great with only ~2 million documents,
but eventually it would take ~5
tries for a full indexing job to complete without runnin
Hey,
On OS X you should be able to apply the patch the same way as on Linux:
patch -p[level] < name_of_patch.patch. You can do this from the shell,
including on the Mac.
David Stuart
On 11 May 2010, at 17:15, Jonty Rhods wrote:
hi all,
I am very new to solr.
Now I required to patch solr (patch no
Thanks Mark,
I already fixed it in the meantime and quickly went on with the usual stuff, i
know, bad me =). I'll file a Jira report tomorrow and update the wiki on this
subject. I'll can also file another ticket from another current topic on this
subject; that's about a proper use-case f
hi David,
thanks for the quick reply.
Please give me the full command so I can patch. What is the meaning of [level]?
As I wrote, I had downloaded the latest src from trunk. So please also tell
me what the command will be in the terminal and from where I should run it.
Should I try
>patch -p[level] < name_of_patch.patch
Hi Lance,
On Mon, May 10, 2010 at 5:43 PM, Lance Norskog wrote:
>
> It thinks you are talking to a core named 'Universities'. If this does
> not help, you could post the code that opens the SolrServer and
> creates the query object.
>
>
It's looking for a core named "Universities - Embedded Sol
Hi jonty,
In the root directory of the src run
patch -p0 < name_of_patch.patch
David Stuart
On 11 May 2010, at 17:50, Jonty Rhods wrote:
hi David,
thanks for quick reply..
please give me full command. so I can patch. what is meaning of
[level].
As I write I had downloaded latest src f
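As an aside, the -p level tells patch how many leading path components to strip from the file names inside the patch before looking for the files; -p0 strips none. A self-contained demo with made-up file names:

```shell
# A one-file tree and a hand-written patch whose path starts at demo/...
mkdir -p demo/src
printf 'hello\n' > demo/src/a.txt
cat > fix.patch <<'EOF'
--- demo/src/a.txt
+++ demo/src/a.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
# -p0 leaves demo/src/a.txt intact; -p1 would look for src/a.txt instead.
patch -p0 < fix.patch
cat demo/src/a.txt
```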
In Eclipse (you *may* need to have the subclipse plugin installed), just
right-click on the project>>team>>apply patch and follow the wizard
HTH
Erick
On Tue, May 11, 2010 at 12:50 PM, Jonty Rhods wrote:
> hi David,
> thanks for quick reply..
> please give me full command. so I can patch. w
Try using "geo_distance" in the return fields.
On Thu, Apr 29, 2010 at 9:26 AM, Jean-Sebastien Vachon
wrote:
> Hi All,
>
> I am using JTeam's Spatial Plugin RC3 to perform spatial searches on my index
> and it works great. However, I can't seem to get it to return the computed
> distances.
>
>
> How do I implement a requirement like "if category is xyz,
> the price should
> be greater than 100 for inclusion in the result set".
>
> In other words, the result set should contain:
> - all matching documents with category value not xyz
> - all matching documents with category value xyz and p
I changed my schema to use the "tdouble" that the link above describes:
and I'm able to do the search correctly now.
-- Chris
On Tue, May 11, 2010 at 11:37 AM, Christopher Gross wrote:
> The lines from the schema.xml:
>
>
> required="false" />
> required="false" />
>
> I w
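For anyone landing here with the same negative-coordinate problem, the working setup is roughly the stock Solr 1.4 trie type plus tdouble fields; the exact attribute values below are quoted from memory of the example schema, so treat them as an assumption:

```xml
<!-- trie-encoded double: range queries work correctly, negatives included -->
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
<field name="lat" type="tdouble" indexed="true" stored="true"/>
<field name="lon" type="tdouble" indexed="true" stored="true"/>
```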
Hi,
Thanks for your suggestion, but I received more information about this issue
from one of JTeam's developers, and he told me that
my problem was caused by the plugin not supporting sharding at this time.
In my case, I noticed that individual shards were computing the distance
through the g
thanks Ahmet.
(+category:xyz +price:[100 TO *]) (+*:* -category:xyz)
why do we have to use (+*:* -category:xyz) instead of -category:xyz?
On Tue, May 11, 2010 at 3:08 PM, Ahmet Arslan wrote:
> > How do I implement a requirement like "if category is xyz,
> > the price should
> > be greater th
Posted a few weeks ago about this but no one seemed to respond. Has anyone
seen this before? Why is this happening and more importantly how can I fix
it? Thanks in advance!
May 11, 2010 12:05:45 PM org.apache.solr.handler.dataimport.DataImporter
doDeltaImport
SEVERE: Delta Import Failed
java.lang
FYI I am using the mysql-connector-java-5.1.12-bin.jar as my JDBC driver
This has to be done in the servlet container, not in Solr. Most
servlet containers have options to do this kind of control.
But! You have to add a different entry point for every query handler
you configure.
On Mon, May 10, 2010 at 8:47 AM, Tommy Chheng wrote:
> Is there a way to configure solr
Also, which JDBC driver is this? There are quirks with various
drivers, which should be documented on the DataImportHandler page.
On Mon, May 10, 2010 at 9:47 PM, caman wrote:
>
> This may help:
>
> batchSize : The batchsize used in jdbc connection
>
>
>
> http://wiki.apache.org/solr/DataImportHa
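For MySQL Connector/J in particular, the DataImportHandler wiki notes that batchSize="-1" makes DIH pass a fetchSize of Integer.MIN_VALUE, which switches the driver to row-by-row streaming; a sketch with placeholder URL and credentials:

```xml
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb"
            user="solr" password="secret"
            batchSize="-1"/>
```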
Hi,
I am getting a weird behavior in my Solr (1.4) index:
I have a field defined as follows:
and in all my index documents, the value of this field is "ProductBean"
(without quotes).
However, in the Solr admin console, when I type in the following query, I am
expecting all my documents to
How can one accomplish a MoreLikeThis search using boost functions?
If its not capable out of the box, can someone point me in the right
direction on what I would need to create to get this working? Thanks
Hi all,
Announcing another Solr training course in Oslo, Norway June 1st-3rd.
This is the 3 day "Developing Search Applications with Solr" Lucid Imagination
course.
The training will be conducted in Norwegian.
For more information and sign-up, see www.solrtraining.com
--
Jan Høydahl, search sol
What is the debug output of the query? That would shed some light on the
issue...
Best
Erick
On Tue, May 11, 2010 at 5:48 PM, Alex Wang wrote:
>
> Hi,
>
> I am getting a weird behavior in my Solr (1.4) index:
>
> I have a field defined as follows:
>
>
>
> and in all my index documents, the val
Blargy wrote:
Posted a few weeks ago about this but no one seemed to respond. Has anyone
seen this before? Why is this happening and more importantly how can I fix
it? Thanks in advance!
May 11, 2010 12:05:45 PM org.apache.solr.handler.dataimport.DataImporter
doDeltaImport
SEVERE: Delta Import F
I'm trying to debug an issue I have surrounding a facet heavy 12M document
index I'm running, sharded across three nodes. In summary, occasionally one
of the nodes will slow down to a snail's pace when returning search results
(tens of seconds, up to minutes), causing search on the index to hang.
S
Hi,
Special characters in the text used for boost queries are not removed. For
example, bq=field1:(what is xyz?)^10 gets parsed into query field1:xyz?10
(what and is are stop words). Question mark didn't get removed -- field1
uses standard tokenizer and standard filter, so I expect it to get remov
Hi,
Thanks for your reply.
We actually have some master data and query data.
Once the master data has been loaded, Solr will be indexed, and when the
query data is loaded it will be searched against the indexed data.
The query will be formed like field1:("someValue" None), where 'None' is th
hi
I tried to patch with the following command on the console
patch -p0 < SOLR-236-trunk.patch
patching file
solr/src/java/org/apache/solr/handler/component/CollapseComponent.java
patching file
solr/src/test/test-files/solr/conf/solrconfig-fieldcollapse.xml
patching file
solr/src/java/org/apache/solr/searc
I tried to apply the patch from a different src folder, but it asks for the
file name to patch.
I also think some dependent patches may be required.
Please guide me in applying SOLR-236.patch (with dependent patches, if any).
If required, I will download the src code from trunk again.
thanks
On Wed, May 12, 2010
Hi,
Thanks, Eric.
The search parameter is too long to send via GET, so I am thinking of
opting for POST. Is it possible to make a POST request to Solr? Are any
configuration or code changes required for this? I have many
parameters, but only one is supposed to be very lengthy.
Any suggestions?
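Yes, the select handler accepts POST, so long parameters can go in the request body; the sketch below only builds and prints the curl command line (host, handler, and query are placeholders), it does not contact a server:

```shell
# curl's --data-urlencode sends the parameter URL-encoded in the POST body,
# keeping the very long q value out of the URL entirely.
solr_url='http://localhost:8983/solr/select'
long_q='field1:("someValue" None)'   # imagine this being thousands of chars
printf 'curl %s --data-urlencode q=%s --data-urlencode wt=xml\n' \
       "$solr_url" "$long_q"
```

Note that the servlet container, not Solr, may still cap POST body size (e.g. Tomcat's maxPostSize), so very large bodies can need a container-side setting.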