ething else to
> trigger it.
>
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 15 August 2016 at 23:54, Luis Sepúlveda wrote:
> > Thanks for the prompt reply.
> >
> > h.enabled=true is a typo. It
where "h" does not match the table names in the FROM
> statement. Perhaps that's the problem?
>
> Regards,
> Alex.
>
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 15 August 2016 a
Hello,
Solr is trying to process non-existing child/nested entities. By
non-existing I mean that they exist in the DB but should not be on the Solr side
because they don't match the conditions in the query I use to fetch them.
I have the below Solr data configuration. The relationship between tables
is c
Hi Rajesh,
Have you taken a look at Query Re-Ranking? The idea is a little different from
what you want, but I think it should work: essentially you use your normal
search query and then re-rank the top-N documents using a second query; this
second query could use the position field to influence y
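For reference, a minimal SolrJ sketch of this idea; the {!rerank} syntax is standard Solr (4.9+), but the core URL, the SolrJ 5.x-style client constructor, and the position-based boost function are only assumptions for illustration:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class ReRankExample {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
            SolrQuery q = new SolrQuery("title:foo");  // the normal search query
            // re-rank the top 200 hits of the main query with a second query
            q.add("rq", "{!rerank reRankQuery=$rqq reRankDocs=200 reRankWeight=2}");
            // hypothetical boost: higher score for lower values of the position field
            q.add("rqq", "{!func}recip(field(position),1,1000,1000)");
            System.out.println(solr.query(q).getResults());
            solr.close();
        }
    }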
CloudSolrClient will route all the documents
to the correct leader, leading to better performance. That class
scales nearly linearly in terms of indexing throughput with the
number of shards.
FWIW,
Erick
On Sat, Jun 27, 2015 at 2:32 AM, Jorge Luis Betancourt González
wrote:
> Thanks for the prompt reply Shawn
some
methods
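A minimal sketch of the CloudSolrClient indexing described above, assuming the SolrJ 5.x-style constructor and a made-up ZooKeeper address:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class CloudIndexer {
        public static void main(String[] args) throws Exception {
            CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181/solr");
            client.setDefaultCollection("collection1");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            client.add(doc);   // routed client-side to the correct shard leader
            client.commit();
            client.close();
        }
    }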
On 6/26/2015 2:27 PM, Jorge Luis Betancourt González wrote:
> I'm trying to use the ConcurrentUpdateSolrClient class, which has some methods
> that accept an additional parameter to indicate the collection; some of these
> methods are add(String collection, SolrInputDoc
Hi all,
I'm trying to use the ConcurrentUpdateSolrClient class, which has some methods
that accept an additional parameter to indicate the collection; some of these
methods are add(String collection, SolrInputDocument doc), request(SolrRequest,
String collection). With HttpSolrClient this works f
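For context, a sketch of the per-collection calls being discussed, using the SolrJ 5.x constructor (the URL, queue size, and thread count are assumptions):

    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class ConcurrentIndexer {
        public static void main(String[] args) throws Exception {
            // base URL without a collection; queue of 100 docs, 4 sender threads
            ConcurrentUpdateSolrClient client =
                new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 100, 4);
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            client.add("collection1", doc);   // collection passed per request
            client.commit("collection1");
            client.blockUntilFinished();      // wait for queued updates to flush
            client.close();
        }
    }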
scale horizontally and startup new Tomcat + Solrs from 4 to N nodes.
Best,
- Luis Cappa
2015-05-19 15:57 GMT+02:00 Michael Della Bitta :
> Are you sure the requests are getting queued because the LB is detecting
> that Solr won't handle them?
>
> The reason why I'm asking
the
doc/field level.
Is this a desired behaviour?
Regards,
- Original Message -
From: "Jorge Luis Betancourt González"
To: solr-user@lucene.apache.org
Sent: Thursday, May 14, 2015 11:49:18 PM
Subject: Re: [MASSMAIL]Re: High fieldNorm values causing really odd results
Regarding t
Does a boost field in Solr have any use in the score calculation? From what I can
see in [1], if a boost attribute is used at the doc/field level it may be encoded
in the norm field and then used to boost the specific match in the doc/field.
But I have a schema.xml with a boost field defined and using
Regarding the experiment, sorry if I explained myself the wrong way: the
indexed document doesn't have 119669 terms, it has a lot fewer (less than
1000 terms, I don't have the exact number here now); instead, 119669 is the
number of distinct terms reported by Luke (Top-terms total in the a
Hi Hoss,
First of all, thank you for your reply.
Sorry for leaving the Solr version out in my previous email, I'm using Solr
4.10.3 running on Centos7, with the following JRE: Oracle Corporation OpenJDK
64-Bit Server VM (1.7.0_75 24.75-b04)
These are the relevant portions of my schema.xml
, 2015 at 12:49 PM, Luis Cappa Banda
> wrote:
> > If you don't mark an indexed and 'facetable' field as stored, I was
> > expecting not to be able to return its values, so faceting would make no
> sense.
>
> Faceting does not use or retrieve stored field values.
nces between them are:
- Regular expression: i18n* VS *_facet
- Multivalued: *_facet are multivalued.
Regards,
- Luis Cappa
2015-05-14 18:32 GMT+02:00 Yonik Seeley :
> On Thu, May 14, 2015 at 10:47 AM, Luis Cappa Banda
> wrote:
> > Hi Yonik,
> >
> > Yes, they are the tar
Hi everyone:
For the last couple of weeks I've been noticing some really odd results in my Solr
server; searching for the root cause, the one thing I can point out is a very
high value of the fieldNorm parameter in the score calculation. A snippet of
the debug info:
{
"match":true,
"value":4
get are dynamic,
indexed and stored values. The only difference is that *_target one is
multivalued. Does that make any sense?
Regards
- Luis Cappa
2015-05-14 16:42 GMT+02:00 Yonik Seeley :
> Are the _facet fields the target of a copyField in the schema?
> Realtime get either gets the values
Ehem, *_target ---> *_facet.
2015-05-14 16:47 GMT+02:00 Luis Cappa Banda :
> Hi Yonik,
>
> Yes, they are the target of copyFields in the schema.xml. These *_target
> fields are supposed to be used as some specific searchable (thus, tokenized)
> fields that in the future ar
bug (because maybe it is the expected
behavior, but after some years using Solr I think it is not) I can create
the JIRA issue and debug it more deeply to provide a patch, with the aim of
helping.
Regards,
--
- Luis Cappa
Hi,
I have a Solr instance using the clustering component (with the Lingo
algorithm) working perfectly. However, when I get back the cluster results,
only the IDs of the documents come back. What is the easiest way to
retrieve full documents instead? Should I parse these IDs into a new query
to Sol
So bottom line, you're trying to get the count of distinct values in the
loginName field? At least based on your query "*:*". If this is what you're
after, check out the Stats component, especially the calcDistinct parameter,
although if you expect a really high cardinality in the field this could b
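A sketch of the Stats component usage referred to above (SolrJ 5.x-style client; the core URL is an assumption):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DistinctLogins {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);                        // only the stats are needed
            q.set("stats", true);
            q.set("stats.field", "loginName");
            q.set("stats.calcdistinct", true);   // expensive on high-cardinality fields
            QueryResponse rsp = solr.query(q);
            System.out.println(rsp.getFieldStatsInfo().get("loginName").getCountDistinct());
            solr.close();
        }
    }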
For a project I'm working on, what we do is store the user's query in a
separate core that we also use to provide autocomplete query functionality;
so far, the frontend app is responsible for sending the query to Solr, meaning:
1. execute the query against our search core and 2. send an updat
sults.
We excluded 'special N' with a -id:(1 2 3 ... N) type query, all done on the client
side.
Ahmet
On Tuesday, January 27, 2015 8:28 PM, Jorge Luis Betancourt González
wrote:
Hi all,
Recently I got an interesting use case that I'm not sure how to implement, the
idea is that t
Hi all,
Recently I got an interesting use case that I'm not sure how to implement: the
idea is that the client wants a fixed number of documents, let's call it N, to
appear at the top of the results. Let me explain a little: we're working with
web documents, so the idea is to promote the documen
Perhaps you could use a DocTransformer to convert the unix time field into any
representation you want? You'd need to write a custom DocTransformer, but this
is not a complex task.
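A rough sketch of such a transformer, following the Solr 4.x DocTransformer API (the factory name, source field, and date format are assumptions; signatures vary across versions):

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.params.SolrParams;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.transform.DocTransformer;
    import org.apache.solr.response.transform.TransformerFactory;

    // registered in solrconfig.xml and used as fl=*,[isodate]
    public class UnixTimeTransformerFactory extends TransformerFactory {
        @Override
        public DocTransformer create(String field, SolrParams params, SolrQueryRequest req) {
            return new DocTransformer() {
                @Override
                public String getName() { return field; }
                @Override
                public void transform(SolrDocument doc, int docid) {
                    Object unix = doc.getFieldValue("timestamp");  // assumed source field
                    if (unix instanceof Number) {
                        doc.setField(field, new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'")
                            .format(new Date(((Number) unix).longValue() * 1000L)));
                    }
                }
            };
        }
    }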
Regards,
- Original Message -
From: "Ahmed Adel"
To: solr-user@lucene.apache.org
Sent: Monday, January 26, 2
Hi Dan:
Agreed, this question is more Nutch-related than Solr ;)
Nutch doesn't send any data to the /update/extract request handler; all the text
and metadata extraction happens on the Nutch side rather than relying on the
ExtractingRequestHandler provided by Solr. Underneath, Nutch uses Tika, the same
te
, January 23, 2015 8:26:48 AM
Subject: RE: Avoiding wildcard queries using edismax query parser
Here's a Jira for this: https://issues.apache.org/jira/browse/SOLR-3031
I've attached a patch there that might be useful for you.
-Michael
-Original Message-
From: Jorge Luis Betanco
uery parser
The dismax query parser does not support wildcards. It is designed to be
simpler.
-- Jack Krupansky
On Thu, Jan 22, 2015 at 5:57 PM, Jorge Luis Betancourt González <
jlbetanco...@uci.cu> wrote:
> I was also suspecting something like that, the odd thing was that the with
> the
;
To: "solr-user"
Sent: Thursday, January 22, 2015 4:46:08 PM
Subject: Re: Avoiding wildcard queries using edismax query parser
I suspect the special characters get caught before the analyzer chains.
But what about pre-pending a custom search component?
Regards,
Alex.
Sign up
Hello all,
Currently we are using the edismax query parser in an internal application. We've
detected that some wildcard queries including "*" are causing some performance
issues, and for this particular case we're not interested in allowing any user
to request all the indexed documents.
This coul
I think this sounds like grouping results by field?
You can enable grouping by adding &group=true&group.field=YOURFIELD to test
this feature.
For each unique value of the field specified in group.field, Solr returns a
docList with the *top scoring document*. In the docList you can see the total
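A small SolrJ sketch of the grouping request (the client URL and field name are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class GroupingExample {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
            SolrQuery q = new SolrQuery("*:*");
            q.set("group", true);
            q.set("group.field", "YOURFIELD");
            q.set("group.ngroups", true);   // also report the number of groups
            QueryResponse rsp = solr.query(q);
            rsp.getGroupResponse().getValues().forEach(cmd ->
                cmd.getValues().forEach(g ->
                    System.out.println(g.getGroupValue() + " -> "
                        + g.getResult().getNumFound() + " docs")));
            solr.close();
        }
    }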
Is this the full output? Try verbose. I had an issue with the same library; in my
case I was downloading from a local Nexus mirror, but the problem was a bad
checksum, which I figured out with the ant -verbose compile command. Disabling
the checksum check for my local Nexus got it working j
Wouldn't it be easier if, on the site, you ensure that only terms are submitted
to the actual search? In an app I worked on some time ago, the default behavior
of the JavaScript component used for autocompletion was to first autocomplete
the term in the input and then submit the query against the backen
The whole idea behind Solr is to solve the problem you just explained; in
particular, what you need is to define the title field as a solr.TextField and
then define a tokenizer. The tokenizer essentially transforms the initial
text into tokens. Solr has several tokenizers, each with its s
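To see what a tokenizer does, a tiny Lucene sketch (Lucene 5+ no-arg StandardAnalyzer; the sample text is arbitrary):

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class TokenDemo {
        public static void main(String[] args) throws Exception {
            StandardAnalyzer analyzer = new StandardAnalyzer();
            TokenStream ts = analyzer.tokenStream("title", new StringReader("The Quick Brown-Fox"));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term);   // quick, brown, fox (lowercased, stopword removed)
            }
            ts.end();
            ts.close();
            analyzer.close();
        }
    }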
How would you measure which snippet is the best?
On Nov 9, 2014, at 1:59 PM, SolrUser1543 wrote:
> Let's say that for some query there are several results, with several hits
> for each one, which are shown in the highlight section of the response.
>
> Is it possible to select only one best hit for e
I remember a talk by CareerBuilder where they wrote an API using the approach
explained by Alexandre, and they got really good results.
- Original Message -
From: "Anurag Sharma"
To: solr-user@lucene.apache.org
Sent: Saturday, November 8, 2014 7:58:48 AM
Subject: Re: How to dynamically crea
Hi all:
From the description of the StandardTokenizer, it should recognize Internet
domain names and email addresses and preserve them as a single token, which
works great, but I've detected that in cases like this:
socks25.domain.com it outputs 2 tokens: socks25 | domain.com
if the URL d
When you fire a query against Solr with wt=csv, the response coming from
Solr is *already* in CSV; the CSVResponseWriter is responsible for translating
SolrDocument instances into CSV on the server side, so I don't see any
reason to use it yourself, Solr already does the heavy lifting
Are you going to use the values stored in Solr to display the data in HTML? For
searching purposes I suggest deleting all the HTML tags and storing the plain
text; for this you could use the HTMLStripCharFilterFactory char filter, which
will "clean" your content and only pass the actual text whic
Although this looks like a nice & simple addition to the web interface.
- Original Message -
From: "Ramzi Alqrainy"
To: solr-user@lucene.apache.org
Sent: Wednesday, October 29, 2014 3:18:26 PM
Subject: Re: Clear Solr Admin Interface Logging page's logs
Yes sure, if you use jetty containe
I
want/need?
-Original Message-
From: Jorge Luis Betancourt Gonzalez [mailto:jlbetanco...@uci.cu]
Sent: Friday, 26 September 2014 19:15
To: solr-user@lucene.apache.org
Subject: Re: (auto)suggestions, but only from a "filtered" set of documents
Perhaps instead
I believe some of the statistics functions that you're trying to use are
present in facets.
- Original Message -
From: "nabil Kouici"
To: solr-user@lucene.apache.org
Sent: Thursday, October 23, 2014 5:57:27 AM
Subject: Analytics component
Hi All,
I'm trying to use Solr to do some ana
SOLR-6357
>
> I can't think of any other queries at the moment. You might consider using
> the above query (which should work as a normal select query) to get the
> IDs, then delete them in a separate query.
>
>
> On 10 October 2014 07:31, Luis Festas Matos wrote:
>
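A sketch of the select-then-delete approach suggested above (SolrJ 5.x-style client; the query and field names are placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrDocument;

    public class SelectThenDelete {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
            // 1. run the select query that finds the unwanted docs
            SolrQuery q = new SolrQuery("some_field:some_value");
            q.setFields("id");
            q.setRows(1000);
            List<String> ids = new ArrayList<>();
            for (SolrDocument doc : solr.query(q).getResults()) {
                ids.add((String) doc.getFieldValue("id"));
            }
            // 2. delete them in a separate request
            if (!ids.isEmpty()) {
                solr.deleteById(ids);
                solr.commit();
            }
            solr.close();
        }
    }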
Given the following Solr data:
1008rs1cz0icl2pk
2014-10-07T14:18:29.784Z
h60fmtybz0i7sx87
1481314421768716288
u42xyz1cz0i7sx87
h60fmtybz0i7sx87
1481314421768716288
u42xyz1cz0i7sx87
h60fmtybz0i7sx87
1481314421448900608
I would like to know how to *DELETE docum
If you're talking about a generic web crawl you could use something like Nutch
[1]; keep in mind that it's a full web crawler, and it does a pretty good job.
I've been using it for more than 2 years now and I'm very happy, although
I don't crawl just a couple of sites but a wider spectrum
I see you're defining a default value for "rows"; this could be overridden on
the request, and requesting a lot of documents from Solr can stress out your
server/cluster, of course, if the client in question has that many documents. If
this is a fixed value and the clients can't request more docum
Don't worry, the way Hoss explained it is indeed the way I know it works,
but the example provided in the book piqued my curiosity, and hence the question
in this thread.
Regards,
On Sep 30, 2014, at 5:59 PM, Timothy Potter wrote:
> Indeed - Hoss is correct ... it's a problem with the example
Hi,
Does Lucene support syllabification of words out of the box? If so, is there
support for Brazilian Portuguese? I'm trying to set up a readability score
for short text descriptions and this would be really helpful.
thanks,
--
Luis Carlos Guerrero
about.me/luis.guerrero
Perhaps instead of the suggester component you could use the EdgeNGramFilter
and provide partial matches, so you will be able to configure a custom request
handler that will “suggest” terms or phrases for you. I'm using this approach
to provide query suggestions; of course, I'm indexing the quer
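What the EdgeNGram approach produces at index time, as a small Lucene sketch (Lucene 5.x-style constructors; the gram sizes are arbitrary):

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class EdgeNGramDemo {
        public static void main(String[] args) throws Exception {
            WhitespaceTokenizer tok = new WhitespaceTokenizer();
            tok.setReader(new StringReader("Solr"));
            TokenStream ts = new EdgeNGramTokenFilter(new LowerCaseFilter(tok), 2, 10);
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term);   // so, sol, solr
            }
            ts.end();
            ts.close();
        }
    }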
Krupansky wrote:
> I am not aware of any such feature! That doesn't mean it doesn't exist, but I
> don't recall seeing it in the Solr source code.
>
> -- Jack Krupansky
>
> -Original Message- From: Jorge Luis Betancourt Gonzalez
> Sent: Wednesday, Septem
I've done something similar to this using the EdgeNGram, not the
spellchecker component; I don't know if this fits your requirements.
The relevant portion of my fieldType config:
class="solr.SpellCheckComponent">
>
>
. See
> solrconfig.xml:
>
>   <requestHandler name="/select" class="solr.SearchHandler">
>     <lst name="defaults">
>       <str name="echoParams">explicit</str>
>       <int name="rows">10</int>
>       <str name="df">text</str>
>     </lst>
> ...
>
> -- Jack Krupansky
>
> -Original Message- From: Jorge Luis Betancourt Gonzalez
> Sent: Tuesday, September 23, 2014 11:02 AM
> To: solr-user@lucene.apache.org
> Subject: Ch
Hi:
I'm trying to change the default configuration for the query component of a
SearchHandler; basically I want to set a default value for the rows parameter
and have this value shared by all my SearchHandlers. As stated in the
solrconfig.xml comments, this could be accomplished by redeclaring
You could create a bunch of dynamic fields according to your needs,
creating a dynamic field for each type of data (and several
combinations), and then create a small wrapper around SolrJ that
wraps the patterns defined in your schema.xml in a more understandab
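A hypothetical sketch of such a wrapper, mapping value types to the stock dynamic-field suffixes (*_s, *_i, *_b, *_dt) from the example schema:

    import java.util.Date;
    import org.apache.solr.common.SolrInputDocument;

    public class DynamicFieldDoc {
        private final SolrInputDocument doc = new SolrInputDocument();

        public DynamicFieldDoc set(String name, Object value) {
            doc.addField(name + suffixFor(value), value);
            return this;
        }

        private String suffixFor(Object value) {
            if (value instanceof Integer) return "_i";
            if (value instanceof Boolean) return "_b";
            if (value instanceof Date)    return "_dt";
            return "_s";   // default to string
        }

        public SolrInputDocument toSolrDoc() { return doc; }

        public static void main(String[] args) {
            SolrInputDocument d = new DynamicFieldDoc()
                .set("author", "jorge")   // indexed as author_s
                .set("views", 42)         // indexed as views_i
                .toSolrDoc();
            System.out.println(d);
        }
    }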
Which crawler are you using?
On Sep 18, 2014, at 10:14 AM, keeblerh wrote:
> eShard wrote
>> Good afternoon,
>> I'm using solr 4.0 Final
>> I need movies "hidden" in zip files that need to be excluded from the
>> index.
>> I can't filter movies on the crawler because then I would have to exclude
n the same field
not that common?
On Tue, Sep 16, 2014 at 11:06 AM, Luis Carlos Guerrero <
lcguerreroc...@gmail.com> wrote:
> Thanks for the response, I've been working on solving some of the most
> evident issues and I also added your garbage collector parameters. First of
> al
p the GC work better for
> you (which is not to say there isn't a leak somewhere):
>
> -XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40
>
> This should lead to a nice up-and-down GC profile over time.
>
> On Thu, Sep 11, 2014 at 10:52 AM, Luis Carlos Guerre
easons. Was there some issue reported
related to elevated memory consumption by the field cache?
any help would be greatly appreciated.
regards,
--
Luis Carlos Guerrero
about.me/luis.guerrero
What are you developing: a custom search component? An update processor? A
different class for one of the zillion moving parts of Solr?
If you have access to a SolrCore instance you could use it to get access;
essentially, using the SolrCore instance specific to the current core will cause
the l
In one of the talks by Trey Grainger (author of Solr in Action) he touches on how
CareerBuilder deals with multilingual content using payloads; it's a little more
work but I think it would pay off.
On Sep 8, 2014, at 7:58 AM, Jack Krupansky wrote:
> You also need to take a stance as to whether
Perhaps what you're trying to do could be addressed by using the
EdgeNGramFilterFactory filter? For query suggestions I'm using a very similar
approach; this is an extract of the configuration I'm using:
Basically this allows you to get partial matches from any part of the string;
let's s
Hi all:
We have a small installation of Solr 3.6 on our hands; right now we have 3
physical servers (1 master and 2 slaves). The ingestion process is done on the
master, which replicates via Solr's internal mechanism to the slaves, which
handle all the queries. We are trying to update to Solr 4
query string. So I'd rather suggest that if the website has
> the appropriate and good data it should come on the first page; it's better
> to aim for the first page rather than finding the position.
>
> With Regards
> Aman Tandon
>
>
> On Tue, Jun 24, 2014 at 10:35 AM,
; way to do it. But you only need to fetch the URL field. You can ignore
> everything else.
>
> wunder
>
> On Jun 23, 2014, at 9:32 PM, Jorge Luis Betancourt Gonzalez
> wrote:
>
>> Basically given a few search terms (query) the idea is to know given one or
>>
Aman Tandon
>
>
> On Tue, Jun 24, 2014 at 4:30 AM, Jorge Luis Betancourt Gonzalez <
> jlbetanco...@uci.cu> wrote:
>
>> I’m using Solr for an analytic use case, one of the requirements is
>> basically given a search query get the position of the first hit. I’m
>
I'm using Solr for an analytics use case; one of the requirements is basically,
given a search query, get the position of the first hit. I'm indexing web pages,
so given a search criteria the client wants to know the position (first
occurrence) of his webpage in the result set (if it appears at al
I'd certainly go for the 2nd option. Depending on what you need, you won't need
to modify Solr itself but extend it using different plugins for what you need.
You'll need to write different components depending on your specific
requirements. I definitely recommend the talks from Trey Grainger, f
Is there some workaround in the Solr ecosystem to get something similar to the
percolator feature offered by Elasticsearch?
Greetings!
One good thing about Kelvin is that it's more a programmatic task, so you could
execute the scripts after a few changes/deployments and get a general idea of
whether the new changes have impacted the search experience; yeah, sure, the
changing catalog is still a problem, but I kind of like being able to execu
- A possible hack that I never followed through -
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201401.mbox/%3CCANGii8eaSouePGxa7JfvOBhrnJUL++Ct4rQha2pxMefvaWhH=g...@mail.gmail.com%3E
Maybe one of those will help you? If they do, make sure to report back!
-Luis
On Tue, Apr 1
Hi Salman,
I was interested in something similar, take a look at the following thread:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201401.mbox/%3CCADSoL-i04aYrsOo2%3DGcaFqsQ3mViF%2Bhn24ArDtT%3D7kpALtVHzA%40mail.gmail.com%3E#archives
I never followed through, however.
-Luis
On
le?
Thanks in advance!
Best,
2014-03-12 12:10 GMT+01:00 Luis Cappa Banda :
> I've seen that StandardDirectoryReader appears in the commit logs. Maybe
> this DirectoryReader type is somehow caching the old segments in SolrB and
> SolrC even if they have been committed previously. If that
ectoryReader or
FSDirectoryReader) that always reads the current segments when a commit
happens?
2014-03-12 11:35 GMT+01:00 Luis Cappa Banda :
> Hey guys,
>
> I've been doing some tests sharing the same index between three Solr servers:
>
> *SolrA*: is allowed to both read and index. The
t. In other words, SolrA shows newer segments and SolrB/SolrC appear
to see just the old ones.
Is that normal? Any idea or suggestion to solve this?
Thank you in advance, :-)
Best regards,
--
- Luis Cappa
e? Do you have any thoughts on how we might better accomplish this
functionality?
Thanks!
On Wed, Feb 5, 2014 at 1:42 PM, Yonik Seeley wrote:
> On Wed, Feb 5, 2014 at 1:04 PM, Luis Lebolo wrote:
> > Update: It seems I get the bad behavior (no documents returned) when the
> > length
cipal worry was about optimizing search speed as much as possible
thanks to optimizing, mergeFactor tuning, cache setup, etc.
Thanks a lot!
2014-02-06 Toke Eskildsen :
> On Thu, 2014-02-06 at 10:22 +0100, Luis Cappa Banda wrote:
> > I knew some performance tips to improve search and I c
ool to prevent weird production
situations.
Best,
- Luis Cappa
2014-02-05 Chris Hostetter :
>
> : I've got an scenario where I index very frequently on master servers and
> : replicate to slave servers with one minute polling. Master indexes are
> : growing fast and I would like
Update: It seems I get the bad behavior (no documents returned) when the
length of a value in the StrField is greater than or equal to 32,767
(2^15). Is this some type of bit overflow somewhere?
On Wed, Feb 5, 2014 at 12:32 PM, Luis Lebolo wrote:
> Hi All,
>
> It seems that I can
trings in StrField that I
can query against?
Thanks,
Luis
Will slave servers "lose" index identifiers that allow
them to replicate delta documents from the master after optimizing them? Will
the next replication update the slaves' indexes, overriding the optimized index?
Thank you very much in advance.
Regards,
--
- Luis Cappa
In the book Apache Solr Beginner’s Guide there is a section dedicated to writing
new Solr plugins; perhaps it would be a good place to start. Also, in the wiki
there is a page about this, but it's a light introduction. I've found that
a very good starting point is just browsing through the code o
Previously on the list a spreadsheet was mentioned; taking into account
that you already have documents in an index, you could extract the needed
information from your index and feed it into the spreadsheet, and it probably
will give you a rough approximation of the hardware you'll be needing
I have some experience using Solarium and it has been great so far. In particular
we use the NelmioSolariumBundle to integrate with Symfony2.
Greetings!
On Jan 28, 2014, at 1:54 PM, Felipe Dantas de Souza Paiva
wrote:
> Hi Folks,
>
> I would like to know what is the best way to integrate PHP an
Q1: Nutch doesn't only handle the parsing of HTML files; it also uses Hadoop to
achieve large-scale crawling using multiple nodes. It fetches the content of the
HTML file, and yes, it also parses its content.
Q2: In our case we use Nutch to crawl some websites and store the content in one
“main” Solr core.
I believe that you are looking for something similar to the percolator feature
present in Elasticsearch. I remember something about a Solr implementation
being discussed here some time ago. Does anyone know if there has been any
progress in this area?
On Jan 27, 2014, at 8:18 AM, Furkan KAMACI w
If I'm remembering correctly, Trey Grainger in one of his talks explained
a few techniques that could be of use. If the equivalency is not dynamic
you could just use synonyms; otherwise some kind of offline processing should
be used to compute the similarity between your queries (given
I would love to see some proxy-like application implemented in Go (partly for
my desire of having time to check out Go).
- Original Message -
From: "Shawn Heisey"
To: solr-user@lucene.apache.org
Sent: Wednesday, January 22, 2014 10:38:34 AM
Subject: Re: Solr middle-ware?
On 1/22/2014 12
",
- q: "tagsValues:"sucks"",
- facet.limit: "-1",
- facet.field: "tagsValues",
- wt: "json"
}
Any idea of what's happening here? I'm confused, :-/
Regards,
--
- Luis Cappa
In a custom application we have, we use a separate core (under Solr 3.6.1) to
store the queries used by the users and then provide the autocomplete feature.
In our case we need to filter some phrases that we don't want to be suggested
to the users. I built a custom UpdateRequestProcessor to i
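A skeleton of such a processor (the field name and the blacklist check are illustrative assumptions, not the original code):

    import java.io.IOException;
    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.SolrQueryResponse;
    import org.apache.solr.update.AddUpdateCommand;
    import org.apache.solr.update.processor.UpdateRequestProcessor;
    import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

    public class PhraseFilterProcessorFactory extends UpdateRequestProcessorFactory {
        @Override
        public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                SolrQueryResponse rsp, UpdateRequestProcessor next) {
            return new UpdateRequestProcessor(next) {
                @Override
                public void processAdd(AddUpdateCommand cmd) throws IOException {
                    SolrInputDocument doc = cmd.getSolrInputDocument();
                    String text = (String) doc.getFieldValue("textsuggest");  // assumed field
                    if (text == null || !text.contains("forbidden")) {        // illustrative check
                        super.processAdd(cmd);   // pass the doc down the chain
                    }
                    // otherwise the document is silently dropped
                }
            };
        }
    }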
processing
the old request, correct?
Is there any way to cancel that first request?
Thanks,
Luis
Happy new year!
I've developed some custom update request processors to accomplish some custom
logic needed in some use cases. I'm trying to write tests for these processors,
but I'd like to test them in a way very similar to how the built-in processors are
tested in the Solr source code. Is there any
With a custom UpdateRequestProcessor this would be doable, but depending on when
this event would be listened for, perhaps Otis is right.
- Original Message -
From: "Utkarsh Sengar"
To: solr-user@lucene.apache.org
Sent: Friday, December 27, 2013 7:29:40 PM
Subject: Re: Trigger event on change o
Right now we have a custom use case: basically we are using a separate Solr
core to store/suggest queries made by our users in our frontend app (written in
Symfony2+Solarium). So basically each time a user hits our search box, the query
goes into this particular core. The thing is that there are
Currently I have the following update request processor chain to prevent indexing
very similar text items into a core dedicated to storing the queries that our users
type into the web interface of our system, configured with enabled=true,
overwriteDupes=false, signatureField=signature, fields=textsuggest,textng,
signatureClass=org.apache.solr.upd
Is it possible to export the doc into markdown?
- Original Message -
From: "Chris Hostetter"
To: solr-user@lucene.apache.org
Sent: Monday, December 9, 2013 14:00:34
Subject: Re: ANNOUNCE: Apache Solr Reference Guide 4.6
: Can we please give some thought to producing these manuals
Hi:
I'm using Solr 3.6 with the dismax query parser. I've found that docs that don't
have all the query terms get ranked above others that contain all the terms in
the search query. Using debugQuery I could see that most of the score
in these cases comes from the coord(q,d) factor. Is there
+1 on this.
- Original Message -
From: "Otis Gospodnetic"
To: solr-user@lucene.apache.org
Sent: Friday, December 6, 2013 9:35:25
Subject: Re: Introducing Luwak for high-performance stored Lucene queries
Hi Charlie,
Very nice - thanks!
I'd love to see a side-by-side comparison wi
I think that one experience in this area could be provided by Trey Grainger,
author of Solr in Action; I believe that some of his work at CareerBuilder
involved the creation of something (somehow) similar to what you're trying to
accomplish. I must say that I'm also interested in this topic, but
Perhaps what you want is a transparent proxy? You could use nginx, Squid,
Varnish, etc. We've been evaluating Varnish as a possibility to run in front of
our Solr server and take advantage of the HTTP caching that Varnish does so
well.
Greetings!
- Original Message -
From: "Markus Jelsma"