RoySolr,
Not sure what language your client is written in, but this is a simple
if statement.
if (category == "TV") {
    qStr = "q=*:*&facet=true&facet.field=tv_size&facet.field=resolution";
} else if (category == "Computer") {
    qStr = "q=*:*&facet=true&facet.field=cpu&facet.field=gpu";
}
curl "h
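If the number of categories grows, the same branching can be made data-driven; a minimal Java sketch (the field names are just the ones from the example above, and the class name is mine):

```java
import java.util.HashMap;
import java.util.Map;

public class FacetQueryBuilder {

    // Category -> facet fields, taken from the example above.
    private static final Map<String, String[]> FACETS = new HashMap<String, String[]>();
    static {
        FACETS.put("TV", new String[] {"tv_size", "resolution"});
        FACETS.put("Computer", new String[] {"cpu", "gpu"});
    }

    // Builds the query string for a category; unknown categories get no facets.
    public static String build(String category) {
        StringBuilder sb = new StringBuilder("q=*:*&facet=true");
        String[] fields = FACETS.get(category);
        if (fields != null) {
            for (String f : fields) {
                sb.append("&facet.field=").append(f);
            }
        }
        return sb.toString();
    }
}
```

Your client would then append the returned string to the select URL.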
You can query the replication status on the slave... When it is
complete continue...
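One way to sketch that polling loop in Java (the HTTP call to /solr/replication?command=indexversion is abstracted behind an interface here; the wiring to the real endpoint is left out):

```java
public class ReplicationWaiter {

    // Supplies the slave's current index version, e.g. by fetching and parsing
    // /solr/replication?command=indexversion on the slave (not shown).
    interface VersionSource {
        long currentVersion();
    }

    // Polls until the slave reports at least the master's version,
    // or gives up after maxAttempts.
    static boolean waitForReplication(VersionSource slave, long masterVersion,
                                      int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            if (slave.currentVersion() >= masterVersion) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```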
On Thu, Jul 7, 2011 at 3:40 PM, Nolan Frausto wrote:
> We are looking for a call back to know when replication has finished after
> we force a replication using
> http://slave_host:port/solr/replication?command=f
Hi,
I am using Ubuntu 10.04 64-bit with Sun Java (build 1.6.0_24-b07) and Tomcat
(6.0.24). Sun Java and Tomcat have been installed using apt-get from the
Ubuntu/Canonical repositories. I run Tomcat with -Xmx4g and have been using
Solr 1.4/3.0/3.1/3.2 without any problems.
However, if
in a certain time period (say Christmas) I will promote docs with the "christmas"
keyword.
or based on users interest I will boost a specific category of products.
or (I am not sure how can I do this one) I will boost docs that current
user's friends (source:facebook) purchased/used/...
or based on regi
Fixed it; turns out I can't get the score if I sort by a function, but if I run
a function query it'll sort by score and give me the score.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Any-way-to-get-the-value-if-sorting-by-function-tp3148864p3150216.html
Sent from the Solr
Plus, debugQuery=on would help you when using MLT after 3.1:
https://issues.apache.org/jira/browse/SOLR-860
koji
--
http://www.rondhuit.com/en/
(11/07/08 6:55), Juan Grande wrote:
Hi Elaine,
The first thing that comes to my mind is that neither the content nor the
term vectors of "text" and "
(11/07/07 18:38), Sowmya V.B. wrote:
Hi
I am trying to add the UIMA module into Solr, and began with the readme file
given here.
https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_1/solr/contrib/uima/README.txt
I would recommend using Solr 3.3 rather than 3.1, as we have changed
Have you looked at dismax/edismax?
I'm not clear what "rules" would be. Could
you provide some examples? Should
various fields get different boosts? Different
boosts based on part-of-speech? Boosts
based on what the value being searched is?
Best
Erick
On Thu, Jul 7, 2011 at 6:38 PM, Cengiz Han
Hi all,
I am very new to SOLR, currently trying to spike it out.
I found some resources about boosting from query string parameters but I
want to configure all this boosting "rules" for my application in the search
server (solr) level, I don't want to build and manipulate SOLR queries in my
applic
Hi Juan!
I think your problem is that in the second case the FieldQParserPlugin is
building a phrase query for "mytag myothertag". I recommend splitting the
filter into two separate filters, one for each tag, e.g.
fq={!field f=tags}mytag&fq={!field f=tags}myothertag. If each tag is used in
many different filters, and the combination of tags is rarely re
Can someone help me with this please?
I am not able to understand, from the readme.txt file provided in the
trunk, how to plug in my own annotator into Solr.
Sowmya.
On Thu, Jul 7, 2011 at 11:38 AM, Sowmya V.B. wrote:
> Hi
>
> I am trying to add UIMA module in to Solr..and began with the readm
Hi Elaine,
The first thing that comes to my mind is that neither the content nor the
term vectors of "text" and "category_text" fields are being stored. Check
the name of the parameter used to store the term vectors, which actually is
"termVectors" and not "term_vectored" (see
http://wiki.apache.o
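For reference, a field declaration with stored content and term vectors enabled would look something like this in schema.xml (field and type names assumed from this thread; note the attribute is "termVectors", not "term_vectored"):

```xml
<field name="category_text" type="text" indexed="true" stored="true" termVectors="true"/>
```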
We are looking for a call back to know when replication has finished after
we force a replication using
http://slave_host:port/solr/replication?command=fetchindex. What is the best
way to go about doing this? We are thinking of forcing the replication then
pulling the command=details page of the s
Did you ever commit?
On 07/07/2011 01:58 PM, Gabriele Kahlout wrote:
so, how about this:
Document doc = searcher.doc(i); // i get the doc
doc.removeField("wc"); // remove the field in case there's
addWc(doc, docLength); //add the new field
writer.updateDocumen
Hi Adeel,
As far as I know, this isn't possible yet, but some work is being done:
https://issues.apache.org/jira/browse/SOLR-139
https://issues.apache.org/jira/browse/SOLR-828
Regards,
*Juan*
On Thu, Jul 7, 2011 at 2:24 PM, Adeel Qureshi wrote:
> What I am trying to do is to update a docume
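Until one of those issues lands, the usual client-side workaround is to read the stored document, overlay the changed fields, and re-add the whole thing. The overlay step, sketched with plain maps (field names taken from the example in the thread):

```java
import java.util.HashMap;
import java.util.Map;

public class PartialUpdate {

    // Copies the stored document and overwrites only the fields present in
    // "changes"; everything else keeps its stored value.
    static Map<String, Object> merge(Map<String, Object> stored,
                                     Map<String, Object> changes) {
        Map<String, Object> merged = new HashMap<String, Object>(stored);
        merged.putAll(changes);
        return merged;
    }
}
```

Note this only works if every field you care about is stored; otherwise the re-add silently drops the unstored values.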
Hello Christopher,
Can you provide the exact query sent to Solr for the one word query and also
the two word query? The field type definition for your title field would be
useful too.
From what I understand, Solr should be able to handle your use case. I am
guessing it is a problem with how the
It works.
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-correct-query-syntax-for-date-tp3147536p3149588.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Andras,
> I added metadata_ so all PDF metadata fields
> should be saved in solr as "metadata_something" fields.
>
> The problem is that the "Category" metadata field from the PDF for some
> reason is not prefixed with "metadata_" and
>
> solr will merge the "Category" field I have in the schema with
Hi Folks,
This is my configuration for mlt in solrconfig.xml
name,text,category_text
2
1
3
1000
50
5000
true
name,text,category_text
I also defined the three fields to have term_vectored attribute in schema.xml
When i su
Hi Guys!
Thanks for the help with my question regarding special characters in
indexes. I have another question that I hope you can help with.
Right now, some of our companies have special, non-alphanumeric
characters in them. Many of these characters get stripped out during
the indexing process
On Thu, Jul 7, 2011 at 3:35 AM, Christian wrote:
[...]
> This is great for finding all things with or without profanity (as separate
> queries), but I would like to get the value as part of the query and let
> the consumer of the call decide what to do with the data.
>
> Is there a way to do thi
I've got an index set up where there is a field that denotes
membership in a document cluster. By using a grouped query, I can get
a result grouped by cluster membership.
Gosh, I wish I could add one more thing to the top of this pile: sort
by group size. I'd like to have the ability to demand sort b
so, how about this:
Document doc = searcher.doc(i); // i get the doc
doc.removeField("wc"); // remove the field in case there's
addWc(doc, docLength); //add the new field
writer.updateDocument(new Term("id", Integer.toString(i++)), doc);
//update the doc
For some r
Cool. Glad it worked out.
On Thu, Jul 7, 2011 at 11:22 AM, serenity keningston <
serenity.kenings...@gmail.com> wrote:
> Thank you very much, I never tried to modify the config files from
> /runtime/local/conf .
>
> In Nutch-0.9, we will just modify from /conf directory. I
> appreciate your time
What I am trying to do is to update a document information while keeping
data for the fields that aren't being specified in the update.
So e.g. if this is the schema
123
some title
active
if i send
123
closed
it should update the status to be closed for this document but not wipe out
title
Thank you very much, I never tried to modify the config files from
/runtime/local/conf .
In Nutch 0.9, we would just modify files in the /conf directory. I
appreciate your time and help.
Merci
On Thu, Jul 7, 2011 at 12:05 PM, Way Cool wrote:
> Just make sure you did change the files under
> /runtime/l
Just make sure you did change the files under
/runtime/local/conf if you are running from runtime/local.
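For reference, the property that error is complaining about goes into nutch-site.xml like this (the value is just an example name for your crawler):

```xml
<property>
  <name>http.agent.name</name>
  <value>MyNutchCrawler</value>
</property>
```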
On Thu, Jul 7, 2011 at 8:34 AM, serenity keningston <
serenity.kenings...@gmail.com> wrote:
> Hello Friends,
>
>
> I am experiencing this error message " fetcher no agents listed in '
> http.a
Let's say my sort is something like:
sort=sum(indexedField, constant). If I have a component that runs right
after the QueryComponent, is it possible to know what this value was for
each of the documents IF the field is not stored, and only indexed? I
scoured through the code and it didn't look l
Hello there, I am using DIH for importing data from a mysql db and a
directory. For this purpose I have written my own Transformer class in order
to modify imported values in several cases. Now we need to add document
support for our indexing server, which led us to use Tika in order to
impor
You can create a login and edit the wiki, so please do!
Erick
On Thu, Jul 7, 2011 at 12:44 PM, Mark juszczec wrote:
> First thanks for all the help.
>
> I think the problem was a combination of not having a unique key defined AND
> not including the commit=true parameter in the delta update.
>
>
Been There, Done That, Got the T-shirt
Erick
On Thu, Jul 7, 2011 at 12:13 PM, Benson Margulies wrote:
> I built a fresh set of snapshots myself, I carefully cleaned my
> project, and everything is happy. So this goes down in the department
> of pirate error.
>
> On Thu, Jul 7, 2011 at 8:29 AM,
First thanks for all the help.
I think the problem was a combination of not having a unique key defined AND
not including the commit=true parameter in the delta update.
Once I did those things, the delta import left me with a single (updated)
copy of the record including the changes in the source
Right, you have to escape the ':' in the date; those are Lucene
query syntax characters. Try:
q=datecreation:2001-10-11T00\:00\:00Z
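If you build these queries in client code, it can be easier to escape every Lucene special character than to remember which ones bite; a small sketch of what SolrJ's ClientUtils.escapeQueryChars does:

```java
public class QueryEscaper {

    // Backslash-escapes the Lucene query syntax characters, including ':'.
    static String escape(String s) {
        final String special = "\\+-!():^[]\"{}~*?|&;/ ";
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (special.indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```

e.g. escape("2001-10-11T00:00:00Z") yields 2001\-10\-11T00\:00\:00Z, which Solr accepts (the extra escaped hyphens are harmless).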
On Thu, Jul 7, 2011 at 10:36 AM, duddy67 wrote:
> I already tried the format:
>
> q=datecreation:2001-10-11T00:00:00Z
>
> but I still get the same error message.
>
I'd restart Solr after changing the schema.xml. The delta import does NOT
require restart or anything else like that.
The fact that two records are displayed is not what I'd expect. But Solr
absolutely handles the replace via the uniqueKey field. So I suspect that
you're not actually doing what you expect. A lit
Hi, I'm running Solr 3.2 with edismax under Tomcat 6 via Drupal.
I'm having some problems writing a query that matches a specific field on
several words. I have implemented an AJAX search that basically takes whatever
is in a form field and attempts to match documents. I'm not having much luck
I built a fresh set of snapshots myself, I carefully cleaned my
project, and everything is happy. So this goes down in the department
of pirate error.
On Thu, Jul 7, 2011 at 8:29 AM, Erick Erickson wrote:
> Then I would guess that you have other (older) jars in your classpath
> somewhere. Does th
hi all,
Our application requires term vectors and uses the SOLR-949 solrj patch to
simplify the client layer. This patch eliminates the need to manually parse
the xml returned by the tvrh (term vector response handler)
https://issues.apache.org/jira/browse/SOLR-949
Can we get this in the head/tru
Erick
I used to, but now I find I must have commented it out in a fit of rage ;-)
This could be the whole problem.
I have verified via admin schema browser that the field is ORDER_ID and will
double check I refer to it in upper case in the appropriate places in the
Solr config scheme.
Curiously
Am 07.07.2011 16:52, schrieb Mark juszczec:
> Ok. That's really good to know because optimization of that kind will be
> important.
Optimization is only important if you have a lot of deletes or updated
docs, or if you want your segments to get merged. (At least that's what I
know about it.)
>
> Wha
Hi everyone!
I would like to ask you a question about a problem I am facing with a
Solr query.
I have a field "tags" of type "textgen" and some documents with the
values "myothertag,mytag".
When I use the query:
/solr/select?sort=name_sort+asc&start=0&qf=tags&q.alt=*:*&fq={!field
q.op=AND f=tags
Ok. That's really good to know because optimization of that kind will be
important.
What of commit? Does it somehow remove the previous version of an updated
record?
On Thu, Jul 7, 2011 at 10:49 AM, Michael Kuhlmann wrote:
> Am 07.07.2011 16:14, schrieb Bob Sandiford:
> > [...] (Without the o
Let me re-state a few things to see if I've got it right:
> your schema.xml file has an entry like order_id, right?
> given this definition, any document added with an order_id that already
> exists in the Solr index will be replaced, i.e. you should have one and
> only one document with a
g
Am 07.07.2011 16:14, schrieb Bob Sandiford:
> [...] (Without the optimize, 'deleted' records still show up in query
> results...)
No, that's not true. The terms remain in the index, but the document
won't show up any more.
Optimize is only for performance (and disk space) optimization, as the
na
I already tried the format:
q=datecreation:2001-10-11T00:00:00Z
but I still get the same error message.
I use the 1.4.1 version. Could this be the reason for my problem?
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-correct-query-syntax-for-date-tp3147536p3148384.html
Sent fr
Hello Friends,
I am experiencing this error message " fetcher no agents listed in '
http.agent.name' property" when I am trying to crawl with Nutch 1.3
I referred to other mails regarding the same error message and tried to change
the nutch-default.xml and nutch-site.xml file details with
http.a
Bob
No, I don't. Let me look into that and post my results.
Mark
On Thu, Jul 7, 2011 at 10:14 AM, Bob Sandiford wrote:
> Hi, Mark.
>
> I haven't used DIH myself - so I'll need to leave comments on your set up
> to others who have done so.
>
> Another question - after your initial index creat
Hi, Mark.
I haven't used DIH myself - so I'll need to leave comments on your set up to
others who have done so.
Another question - after your initial index create (and after each delta), do
you run a 'commit'? Do you run an 'optimize'? (Without the optimize,
'deleted' records still show up i
Bob
Thanks very much for the reply!
I am using a unique integer called order_id as the Solr index key.
My query, deltaQuery and deltaImportQuery are below:
The test I am running is two part:
1. After I do a full import of the index, I insert a brand new record (with
a never existed
Thank you Erik for the information you gave me.
I will test the version of the index in order to know when I need to refresh
the component.
Best Regards,
gquaire
-
Jouve ITS France
--
View this message in context:
http://lucene.472066.n3.nabble.com/the-version-of-a-Lucene-index-changes-aft
Hi,
I think this is a bug, but before reporting it to the issue tracker I
thought I would ask here first.
So the problem is I have a PDF file which among other metadata fields
like Author, CreatedDate etc. has a metadata
field Category (I can see all metadata fields with tika-app.jar started
in
What are you using as the unique id in your Solr index? It sounds like you may
have one value as your Solr index unique id, which bears no resemblance to a
unique id derived from your data...
Or - another way to put it - what is it that makes these two records in your
Solr index 'the same',
Hello all
I'm using Solr 3.2 and am confused about updating existing data in an index.
According to the DataImportHandler Wiki:
*"delta-import* : For incremental imports and change detection run the
command `http://<host>:<port>/solr/dataimport?command=delta-import`. It
supports the same clean, commit, opti
Well, just search for "date field" in schema.xml (assuming a recent
version of Solr, you haven't told us what version you're using).
The "green" assumes you're using an editor that highlights comments
in an XML file.
But all the information you need is right there in Ahmet's e-mail. Dates
are rep
If a document is deleted, the terms are left in the index, the
document is just *marked* as deleted. So anything that
traverses the terms will pick up terms from old (deleted)
documents.
An optimize will remove the "stale" data, so I should think
that your component will have to be refreshed on an
I just checked the classpath. No strays.
You can't 'use the example solr installation' when downloading the war
artifact from the snapshot repo.
You might ask me: what happens if I just drop that war into a plain
tomcat? And I plan to try that.
On Thu, Jul 7, 2011 at 8:29 AM, Erick Erickson wro
Then I would guess that you have other (older) jars in your classpath
somewhere. Does the example Solr installation work?
Best
Erick
On Wed, Jul 6, 2011 at 10:21 PM, Benson Margulies wrote:
> Launching solr-4.0-20110705.223601-1.war, I get a class cast exception
>
> org.apache.lucene.index.Direc
Thanks but I'm still lost.
I didn't see any green colored comments.
Could you show me a concrete example of a date query ?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-correct-query-syntax-for-date-tp3147536p3147890.html
Sent from the Solr - User mailing list ar
> I have a syntax problem in my query with the SOLR date
> format.
> This is what I type:
>
> q=datecreation:2001-10-11
>
> but SOLR returns me an error message:
>
> Invalid Date String:'2001-10-11'
>
> I tried different combinations but none of them works.
> Someone could tells me what is the
Hi
I am trying to add the UIMA module into Solr, and began with the readme file
given here.
https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_1/solr/contrib/uima/README.txt
I am confused about some points in the readme file, hence this email.
2. modify your schema.xml adding the fiel
Thanks Eric for your reply.
To answer to your question, I'm currently developing a kind of
TermsComponent which is able to merge the terms of several fields and has
the ability to reach a position in the list with random access. To do
that, I construct a merged list of terms from the Lucene I
Thanks Ahmet.
Changing the field from String to "text_en" worked!
Sorry for all the mails. I should have understood the schema.xml properly
before asking the question. Now, I see that schema.xml has description of
this field "text_en" !
Sowmya.
On Thu, Jul 7, 2011 at 10:24 AM, Ahmet Arslan wro
What should the query look like?
I can't define 2 spellcheckers in one query. I want something like this:
Search: Soccerclub(what) Manchester(where)
select/?q=socerclub
macnchester&spellcheck=true&spellcheck.dictionary=spell_what&spellcheck.dictionary=spell_where&spell_what=socerclub&spell_where
Hi,
I have a syntax problem in my query with the SOLR date format.
This is what I type:
q=datecreation:2001-10-11
but SOLR returns me an error message:
Invalid Date String:'2001-10-11'
I tried different combinations but none of them works.
Could someone tell me the correct syntax?
Hello Erik,
I need the *_facets also for searching so stored must be true.
"Then, and I used *_facet similar to you, kept a list of all *_facet actual
field names and used those in all subsequent search requests. "
Is this not bad for performance? I only need a few facets, not all.(only the
face
> Thanks for the mail.
> But, just a clarification: changing the field type in
> schema means I have to
> reindex to check if this works, right?
Yes. Restarting the servlet container and re-indexing is required.
Hello Ahmet
Thanks for the mail.
But, just a clarification: changing the field type in schema means I have to
reindex to check if this works, right?
Sowmya
On Thu, Jul 7, 2011 at 10:13 AM, Ahmet Arslan wrote:
> Hello,
>
> Your text and title fields are marked as string which is not tokenized.
> I'm using Solr 3.3 for searching in different languages,
> one of them is Spanish. The ASCIIFoldingFilterFactory works
> fine, but if word begins with a letter accented, like
> "ágora" or "ínclito", it can't find anything. I have to
> search word without accent in order to find some result. For
>
Hello,
Your text and title fields are marked as string, which is not tokenized.
Marking them indexed="true" will make them searchable, but matching will
be verbatim.
Try using text_en for example.
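A concrete example of the change in schema.xml, going from the untokenized string type to text_en (field names assumed from this thread):

```xml
<!-- before: type="string" matches only the exact, whole field value -->
<field name="title" type="text_en" indexed="true" stored="true"/>
<field name="text" type="text_en" indexed="true" stored="true"/>
```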
--- On Thu, 7/7/11, Sowmya V.B. wrote:
From: Sowmya V.B.
Subject: Re: indexing but not
You just need to allocate more heap to your JVM.
BTW, are you doing any complex searches while indexing is in progress, like
retrieving a large set of documents?
Thanx
Pravesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/OOM-at-solr-master-node-while-updating-document-tp3140018p31
Hi
I'm using Solr 3.3 for searching in different languages, one of them
Spanish. The ASCIIFoldingFilterFactory works fine, but if a word begins with
an accented letter, like "ágora" or "ínclito", it can't find anything. I have
to search for the word without the accent in order to find any result. For i