scratch is not
an option, so is there a way of creating an 8.6.2 index from a pre-existing
6.5 index, or something like that?
Thank you so much for your help.
Rafael
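For what it's worth, Lucene's IndexUpgrader only carries an index forward
one major version at a time; the 6.x -> 7.x step looks roughly like this
(jar versions and paths are illustrative):

java -cp lucene-core-7.7.3.jar:lucene-backward-codecs-7.7.3.jar \
     org.apache.lucene.index.IndexUpgrader -verbose /path/to/index

Note, though, that Lucene 8 records which major version originally created
an index and will normally refuse segments that began life on 6.x even
after such a pass, so a full reindex is generally the only reliable route
to 8.6.2.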
Hi,
You were absolutely right, there was a *string* field defined in the qf
parameter...
Using the mm.autoRelax parameter did the trick.
Thank you so much!
Regards
On Wed, Nov 2, 2016 at 5:15 PM, Vincenzo D'Amore wrote:
> Hi Rafael,
>
> I suggest checking all the fields present in y
Hi guys,
I came across the following issue. I configured an edismax query parser
with *mm=100%*, and when the user types in a stopword, no results are
returned (stopwords are filtered before indexing, but somehow either they
are not being filtered before searching or they are taken into account).
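For reference, one combination that usually addresses this (and matches the
fix mentioned above) looks roughly like the following, with illustrative
field names; the key points are that qf should only list analyzed text
fields, not string fields, and that mm.autoRelax=true lets edismax lower
the minimum-match count when clauses are dropped as stopwords at query
time:

defType=edismax
qf=title^2 description
mm=100%
mm.autoRelax=true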
nested documents
and get back to the old-fashioned denormalized approach?
Thanks.
[]'s
Rafael
Absolutely!
Thanks man.
[]'s
Rafael
On Thu, Jul 2, 2015 at 12:42 PM, Alessandro Benedetti <
benedetti.ale...@gmail.com> wrote:
> That is what I was saying :)
> Hope it helps
>
> 2015-07-02 16:32 GMT+01:00 Rafael :
>
> > Just double checking:
> >
>
Just double checking:
In my Ruby backend I ask for (using the given example) all suggested terms
that start with "J.", then I (probably) add all the terms to a Set, and
then return the Set to the view. Right?
[]'s
Rafael
On Thu, Jul 2, 2015 at 12:12 PM, Alessandro Benedetti
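Roughly, that flow with rsolr could look like the sketch below; the core
name and the use of the terms component (via a /terms handler) are
assumptions for illustration, not necessarily what was being discussed:

require 'rsolr'
require 'set'

solr = RSolr.connect(url: 'http://localhost:8983/solr/books') # hypothetical core

# Ask the terms component for indexed terms in the author field that
# start with the prefix the user typed ("J." in the example).
resp = solr.get('terms', params: {
  'terms.fl'     => 'author',
  'terms.prefix' => 'J.',
  'terms.limit'  => 50
})

# With the default flat response layout the terms come back as
# ["term1", count1, "term2", count2, ...] for each field.
suggestions = Set.new
resp['terms']['author'].each_slice(2) { |term, _count| suggestions << term }

suggestions.to_a # hand this back to the view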
Thanks, Alessandro!
Well, I'm using Ruby with rsolr as the client library. I didn't get what
you said about the term id. Do I have to create this field? Or is it a
"hidden field" used by Solr under the hood?
[]'s
Rafael
On Thu, Jul 2, 2015 at 6:41 AM, Alessandro
Hi, I'm building an autocomplete solution on top of Solr for an ebook
seller, but my database is completely denormalized. For example, I have
this kind of record:
*author | title | price*
-----------------+-------+------
J. R. R. Tolkien | Lor
Hi all
I have to index SRT files that belong to videos, so that users can get not
only the video but also the time where their search term occurs in it. For
the sake of clarity, you can find below an example of this kind of file:
1
00:00:08,580 --> 00:00:12,880
Welcome back, and in this video we'
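One way to make the timestamps searchable is to index every cue as its own
Solr document; a minimal sketch in Ruby with rsolr, where the core name and
field names are made up for illustration:

require 'rsolr'

solr = RSolr.connect(url: 'http://localhost:8983/solr/videos') # hypothetical core

docs = []
# An .srt file is a series of cues separated by blank lines: a sequence
# number, a "start --> end" timestamp line, then one or more text lines.
File.read('lesson01.srt').split(/\r?\n\r?\n/).each do |cue|
  lines = cue.strip.split(/\r?\n/)
  next if lines.size < 3
  seq, times, *text = lines
  start_time, end_time = times.split(' --> ')
  docs << {
    id:         "lesson01_#{seq}",   # made-up id scheme
    video_id:   'lesson01',
    start_time: start_time,
    end_time:   end_time,
    text:       text.join(' ')
  }
end

solr.add(docs)
solr.commit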
Hum! It seems to be exactly what I need. Thanks! I'll look for it in the
docs.
Rafael Calsaverini
Data Scientist @ Catho <http://catho.com.br>
cell: +55 11 7525.6222
On Wed, Aug 21, 2013 at 12:08 PM, Erick Eri
thing like edismax for each group of fields?
Something like:
(name^2 surname^2 nickname):(rafael calsaverini) AND (street city
state):(rua dos bobos sao paulo SP)
or whatever is the adequate syntax.
The alternative would be to build a parser for one of the fields, but if I
want to allow for a compl
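One common way to write that kind of per-group query is to nest an edismax
sub-query per field group (URL-encoded when sent over HTTP); a sketch with
the fields from the example:

q=_query_:"{!edismax qf='name^2 surname^2 nickname'}rafael calsaverini" AND _query_:"{!edismax qf='street city state'}rua dos bobos sao paulo SP"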
Hi there,
is there a way to penalize a document's score for lacking a particular
term? It would be quite nice if I could add a negative term to the score,
which is proportional to the idf of a word that is not present in a given
field of that document.
Thanks for your time,
Rafael Calsav
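Since Lucene scores do not readily go negative, a common workaround is to
reward presence instead of subtracting for absence, e.g. with edismax's
multiplicative boost parameter (field and term here are made up):

boost=if(exists(query({!v='title:hobbit'})),1.0,0.5)

Documents whose title lacks the term get their score halved, which has the
same relative effect as a penalty, although it is a fixed factor rather
than one proportional to the term's idf.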
Did it as suggested in the link I sent
tks a lot!
I saw this... since I don't know Velocity that well, I'll try to understand
it, but I would be really glad if (obviously, in case it doesn't take you
much time) you could point me in the direction of the changes I need to
make in my files...
best regards,
Rafael
I forgot to mention that the field I wish to have multiple occurrences
shown for is the one named conteudo.
I am already trying to make it iterate, but so far with no success...
This I didn't know...
I have a file named buscar.vm with the important part as follows:
#foreach($doc in $response.results)
#parse("hit.vm")
#end
hit.vm as follows:
#set($docId = $doc.getFieldValue('id'))
#parse("doc.vm")
and finally doc.vm as follows:
#field('bolet
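For multiple occurrences to show up at all, the request itself has to ask
for more than one snippet per field; the relevant highlighting parameters
(values here are just illustrative) are:

hl=true
hl.fl=conteudo
hl.snippets=5
hl.fragsize=200

If the template prints the field value directly, it also needs a #foreach
over the list of snippets returned for conteudo for the given $docId,
rather than printing a single value.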
difference at all.
Do I have to change anything else? For example, something in the Velocity
template?
best regards,
Rafael
ossible? I tried searching this
mailing list but I couldn't find anyone mentioning this...
best regards,
Rafael
"QTime":12,
"params":{
"fl":"title,img",
"indent":"true",
"start":"0",
"q":"*:*",
"wt":"json",
"rows":"10"}},
"response":{"numFound":1441958,"start":0,"maxScore":1.0,"docs":[]
}
The schema.xml has a lot of stored/indexed fields. Any hints what's wrong?
Thanks in Advance,
Rafael.
es/instances,
> or making sure to restart Solr in between configuration
> changes?
>
> Regards,
> Gora
>
--
Rafael Taboada
/*
* Phone >> 992 741 026
*/
am Content Group
> (615) 213-4311
>
>
> -Original Message-
> From: Rafael Taboada [mailto:kaliman.fore...@gmail.com]
> Sent: Tuesday, June 05, 2012 8:58 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Can't index sub-entities in DIH
>
> Hi Gora,
>
>
iddocumento
nrodocumento
solrconfig.xml
LUCENE_36
3
db-data-config.xml
solr
Thanks for your help.
--
Rafael Taboada
/*
* Phone >> 992 741 026
*/
But I've just tried sub-entities using MySQL, and everything works
perfectly. It shows me that I am doing something wrong with the Oracle
database at work.
I am attaching to this mail the files I used with MySQL. Column mapping is
OK, and sub-entities are OK.
Thanks so much for your opinion.
Kind regards
Rafael
Hi folks,
I've just solved it using outer joins like this:
Any idea why I can't index using sub-entities?
Thanks in advance
On Mon, Jun 4, 2012 at 11:13 AM, Rafael Taboada
wrote:
> Hi folks,
>
> I'm using DIH in order to index my
My schema.xml is:
I can't index the NOMBRE field. Is this because it belongs to a sub-entity?
Thanks for your help
--
Rafael Taboada
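For what it's worth, a minimal db-data-config.xml of that parent/child
shape could look like the sketch below (table names and queries are made
up); the classic trap with Oracle is that the JDBC driver reports column
names in upper case, so the column attributes have to match that case
exactly:

<document>
  <entity name="documento"
          query="SELECT iddocumento, nrodocumento FROM documento">
    <field column="IDDOCUMENTO" name="iddocumento"/>
    <field column="NRODOCUMENTO" name="nrodocumento"/>
    <entity name="persona"
            query="SELECT nombre FROM persona WHERE iddocumento = '${documento.IDDOCUMENTO}'">
      <field column="NOMBRE" name="nombre"/>
    </entity>
  </entity>
</document>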
e in all your config files. Besides a typo somewhere, I'm
> not sure what else would cause this not to map.)
>
> James Dyer
> E-Commerce Systems
> Ingram Content Group
> (615) 213-4311
>
>
> -Original Message-
> From: Rafael Taboada
> [mailto:ka
onCommit
INFO: SolrDeletionPolicy.onCommit: commits:num=2
commit{dir=/home/rafael/solr/data/index,segFN=segments_1,version=1338565818575,generation=1,filenames=[segments_1]
commit{dir=/home/rafael/solr/data/index,segFN=segments_2,version=1338565818584,generation=2,filenames=[_0.tis,
_3.frq, _3.tii
ing and case in all your config files. Besides a typo somewhere, I'm
> not sure what else would cause this not to map.)
>
> James Dyer
> E-Commerce Systems
> Ingram Content Group
> (615) 213-4311
>
>
> -Original Message-
> From: Rafael Taboada [mailto:kaliman
> Maybe... maybe when you re-run it DIH is not replacing any documents that
> already have id's in Solr, leaving them with their old field values. Maybe
> you need to manually delete the old Solr documents and run a fresh full
> import.
>
>
> -- Jack Krupansky
>
> -
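For reference, a fresh full import that first removes the existing
documents is normally triggered with the DataImportHandler's clean option
(assuming the default /dataimport handler path):

http://localhost:8983/solr/dataimport?command=full-import&clean=true&commit=true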
Please,
Can anyone guide me through this issue? Thanks
-- Forwarded message --
From: Rafael Taboada
Date: Thu, May 31, 2012 at 12:30 PM
Subject: Data Import Handler fields with different values in column and name
To: solr-user@lucene.apache.org
Hi folks,
I'm using Sol
Jack,
Thanks for your help.
I restarted Solr every time I changed schema.xml.
Every doc about this mentions that it is possible to map the column to
another name value, but I can't get it to work.
Thanks again.
Rafael
On Thu, May 31, 2012 at 1:27 PM, Jack Krupansky wrote:
> Is there any chance
ag,
for example
When I use a name different from the column, this field is omitted. Please,
can you help me with this issue?
My schema.xml is:
Thanks in advance!
--
Rafael Taboada
:
Could you tell me if there is an error in the configuration and how to
solve it? Thanks
=
Rafael Pina Coronado
Servicio de Informática.
Archivo General de la Región de Murcia
Email: rafael.p...@carm.es
==
and
group attributes, but this did not work either.
Is it possible to do what I am trying? I am unwilling to resort to grep
outside Solr as I am pretty sure Solr is capable of doing what I want...
best regards,
Rafael Ribeiro
ttp11.InternalAprOutputBuffer.doWrite(InternalAprOutputBuffer.java:552)
at org.apache.coyote.Response.doWrite(Response.java:560)
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
... 25 more
Thanks in advance,
Rafael.
--
FWP Systems GmbH
Gebrüder
's wrong with the given configuration, and the exception is not
really clear ;)
Can somebody give me a hint? Thank you in advance.
Best regards,
Rafael.
ECTED]
> > Sent: Monday, October 27, 2008 10:23 AM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Entity extraction?
> >
> > For the record, LingPipe is not free. It's good, but it's not free.
> >
> >
> > Otis
> > --
> &g
Solr can do a simple facet search like FAST, but entity extraction demands
other technologies. I do not know how FAST does it, but at the company I'm
working at (www.cortex-intelligence.com), we use a mix of statistical and
language-specific tasks to recognize and categorize entities in the text.
Thanks for the tip, I'll look at it
[]s
Rossini
On 9/21/07, Mike Klaas <[EMAIL PROTECTED]> wrote:
>
> On 21-Sep-07, at 2:42 PM, Rafael Rossini wrote:
>
> > Thanks for the reply Mike. Is there any plans on doing some like
> > this? Or
> > some direction an
Thanks for the reply, Mike. Are there any plans on doing something like
this? Or some direction anyone could give?
[]s
Rossini
On 9/21/07, Mike Klaas <[EMAIL PROTECTED]> wrote:
>
> On 21-Sep-07, at 8:27 AM, Rafael Rossini wrote:
>
> > Hi all,
> >
> > I´m cons
Hi all,
I'm considering doing something like a "light-weight OLAP" server with
Lucene/Solr. To achieve that I'd have to do some math operations on facets.
Is that possible?
For example, my documents would be a purchase row, like (id,
value, id_department, id_store, id_region ...). If I did a f
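In current Solr versions this kind of per-bucket arithmetic can be
expressed with the JSON Facet API; a sketch using the field names from the
example, summing value per department:

json.facet={
  by_department:{
    type: terms,
    field: id_department,
    facet:{ total:"sum(value)" }
  }
}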
Hello all,
In one simple query on my index,
"http://localhost:8983/solr/select/?q=brasil", I get this:
1226511
java.lang.ArrayIndexOutOfBoundsException: 1226511
at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
at org.apache.lucene.search.TermScorer.score(TermScorer.java:61)
at org.a
I have 3 different instances of Solr on Jetty 6.1.13, but you need
jetty-plus.
My etc/jetty.xml looks like this:
/webapps/solr1
/solr1
/etc/webdefault.xml
solr/home
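The part that actually needs jetty-plus is the JNDI entry that points each
webapp at its own solr/home; in Jetty 6 that is typically an EnvEntry in
the per-context XML, roughly like the sketch below (the path is made up):

<New class="org.mortbay.jetty.plus.naming.EnvEntry">
  <Arg>solr/home</Arg>
  <Arg type="java.lang.String">/opt/solr1</Arg>
  <Arg type="boolean">true</Arg>
</New>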
Hi, Jeff and Mike.
Would you mind telling us about the architecture of your solutions a
little bit? Mike, you said that you implemented a highly-distributed search
engine using Solr as indexing nodes. What does that mean? Did you guys
implement a master/multi-slave solution for replication? Or t