Hi, problem should be caused by missing surrounding curly brackets.
That is, your query is
json.facet=prod:{type:terms,field:product,mincount:1,limit:8}
instead it should be
json.facet={prod:{type:terms,field:product,mincount:1,limit:8}}
that causes the wrong interpretation of the "json/fa
p dump and MemoryAnalyzer.
Regards
Bernd
On 30.09.19 at 09:44, Andrea Gazzarini wrote:
mmm, ok for the core but are you sure things in this case are working
per-segment? I would expect a FilterFactory instance per index,
initialized at schema loading time.
On 30/09/2019 09:04, Bernd Fehling wrote:
A
with 3 index segments, sums up to 6 times
the 2 SynonymMaps. Results in 12 times SynonymMaps.
Regards
Bernd
On 30.09.19 at 08:41, Andrea Gazzarini wrote:
Hi,
looking at the stateful nature of SynonymGraphFilter/FilterFactory
classes,
the answer should be 2 times (one time per type instance
On 30/09/2019 09:04, Bernd Fehling wrote:
And I think this is per core per index segment.
2 cores per instance, each core with 3 index segments, sums up to 6 times
the 2 SynonymMaps. Results in 12 times SynonymMaps.
Regards
Bernd
On 30.09.19 at 08:41, Andrea Gazzarini wrote:
Hi
type.
Best,
Andrea
--
Andrea Gazzarini
*Search Consultant, R&D Software Engineer*
www.sease.io
email: a.gazzar...@sease.io
cell: +39 349 513 86 25
On 29/09/2019 23:49, Dominique Bejean wrote:
Hi,
My concern is about memory used by synonym filter, especially if synonyms
resources files
the doc ID (presumably the ),
> > using get-by-id instead of a standard query will be very efficient. I can
> > imagine
> > under very heavy load this might introduce too much overhead, but it’s
> > where I’d start.
> >
> > Best,
> > Erick
> >
> >
ent component for collection B)
Andrea
On Thu, 29 Aug 2019, 19:46 Arnold Bronley, wrote:
> I can't use CloudSolrClient because I need to intercept the incoming
> indexing request and then add one more field to it. All this happens on
> Solr side and not client side.
>
> On Thu,
Hi Arnold,
why don't you use solrj (in this case a CloudSolrClient) instead of dealing
with such low-level details? The actual location of the document you are
looking for would be completely abstracted.
Best,
Andrea
On Thu, 29 Aug 2019, 18:50 Arnold Bronley, wrote:
> So, here is the problem th
I bet this is the problem:
java.nio.file.NoSuchFileException: /solr-m/server/solr/sitecore_c
Do you have any idea why Solr is not finding that data file?
Andrea
On Wed, 21 Aug 2019, 14:41 Atita Arora, wrote:
> I think it would be useful to provide additional details like solr version,
>
I'm not really sure I got what you want to achieve, but after reading
" I am trying to manipulate the data that I already have in the response,
...
I can see how I can access the response and construct a new one."
an option to consider could be a query response writer [1].
Cheers,
Andrea
[1] ht
Hi Mark, you are using a "range facet" which is a "query-shape" feature,
it doesn't have any constraint on the results (i.e. it doesn't filter at
all).
You need to add a filter query [1] with a date range clause (e.g.
fq=field:[<start date> TO <end date or *>]).
Best,
Andrea
[1]
https://lucene.apache.org/solr/gui
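For example, assuming a date field named `timestamp` (the field name here is hypothetical), a filter query restricting results to 2019 could look like:

```
fq=timestamp:[2019-01-01T00:00:00Z TO 2020-01-01T00:00:00Z]
```

An open upper bound can be expressed with `*` instead of the second date.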
Good morning guys, I have a questions about Solrj and JSON facets.
I'm using Solr 7.7.1 and I'm sending a request like this:
json.facet={x:'max(iterationTimestamp)'}
where "iterationTimestamp" is a solr.DatePointField. The JSON response
correctly includes what I'm expecting:
"facets": {
But there is one thing that
I don't understand: we have copied the DB and the content store, so the
numDocs for the two environments should be the same, no?
Could you also explain the meaning of the maxDocs value, please?
Thanks
Matthieu
*From:*Andrea Gazzarini [mailto:a.gazzar...@sease.io]
*Sent:
Hi Mathieu,
what about the docs in the two infrastructures? Do they have the same
numbers (numdocs / maxdocs)? Any meaningful message (error or not) in
log files?
Andrea
On 08/02/2019 14:19, Mathieu Menard wrote:
Hello,
I would like to have your point of view about an observation we have
Hi Jay, the text analysis always operates on the indexed content. The
stored content of a field is left untouched unless you do something
before it gets indexed (e.g. on the client side or with an
UpdateRequestProcessor).
Cheers,
Andrea
On 14/01/2019 08:46, Jay Potharaju wrote:
Hi,
I have a copy f
Hi,
What Alexander said is right, but if in your scenario you would still go
for that, you could try this [1], that should fit your need.
Best,
Andrea
[1] https://github.com/SeaseLtd/composite-request-handler
On Mon, 3 Dec 2018, 13:26 Alexandre Rafalovitch wrote:
You should not be exposing Solr direc
Hi, you may be interested in the mm [1] parameter, which in this case
should be set to 100%. However, if your requirements are more complicated
than this, mm=100% could have some unwanted side effects because it's
very "rigid".
Best,
Andrea
[1]
https://lucene.apache.org/solr/guide/6_6/the-
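As an illustration (the query, handler, and field here are assumptions), the parameter can be passed per request:

```
q=red leather shoes&defType=edismax&qf=title&mm=100%
```

With mm=100% every clause must match; lowering it (e.g. mm=75%) relaxes the constraint.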
Oops, sorry...too much rush in reading, I didn't read the second part.
Please forget my answer ;)
Andrea
On 21/09/18 15:52, Andrea Gazzarini wrote:
Hi Sergio,
assuming that you don't want to disable tokenisation (otherwise you
can define the indexed field as a string and search it
Hi Sergio,
assuming that you don't want to disable tokenisation (otherwise you can
define the indexed field as a string and search it as a whole),
in "Relevant Search" the authors describe a cool approach using the so
called "Sentinel Tokens", which are symbolic tokens representing the
beginnin
Hi,
as far as I know, this is not possible. Solr is document-oriented,
meaning that its data model works at a "document" level of
granularity. If you index A, you get A.
I see a couple of chances (maybe someone else could add other options):
* index exactly what you need: in your case
Hi Rekna,
I think nobody can seriously answer your questions. The only
"serious" answer, which doesn't help you, is "it
depends"; specifically, it depends on what your goal or goals are, your
context, and so on.
It is not possible, at least in my opinion, to provide such answe
ws fetched" = 0, then the query is not working for some reason:
can you check if some clause in your SQL includes < or > ? They need to
be escaped (&lt; &gt;)
Andrea
On 10/09/2018 17:22, Monique Monteiro wrote:
This is shown in the section "Raw Debug-Response".
On Mon, Sep 10
"Time Elapsed": "0:0:0.432", "Total
Requests made to DataSource": "1", "Total Rows Fetched": "0", "Total
Documents Processed": "0", "Total Documents Skipped": "0", "Full Dump
Started": "
g
On Mon, Sep 10, 2018 at 12:00 PM Andrea Gazzarini
mailto:a.gazzar...@sease.io>> wrote:
You can check the solr.log or the solr-console.log. Another option
is to
activate the debug mode in the Solr console before running the
data import.
Andrea
On 10/09/2018 16:
You can check the solr.log or the solr-console.log. Another option is to
activate the debug mode in the Solr console before running the data import.
Andrea
On 10/09/2018 16:57, Monique Monteiro wrote:
Hi all,
I have a data import handler configured with an Oracle SQL query which
works like a
otidas, body:ii], 0, true),
spanNear([body:ec, body:3.1.3.5], 0, true)]))
Best,
Andrea
On 05/09/18 16:10, Andrea Gazzarini wrote:
You're right, my answer forgot to mention the *tokenizerFactory*
parameter that you can add in the filter declaration. But, differently
from what you think
Hi, please expand a bit. Specifically:
* what are those text files? Configuration files? You want something
like a central point where to manage things like stopwords, synonyms?
* I don't think the shareLib folder has been created for this usage.
However, please post the complete message
tidase II,... -> standardTokenizer ->
Cytosolic, 5, nucleotidase, II
So the two graphs should match... or am I wrong?
Thank you
Danilo
On 05/09/2018 13:23, Andrea Gazzarini wrote:
Hi Danilo,
let's see if this can help you (I'm sorry for the poor debugging, I'm
reading
Hi Danilo,
let's see if this can help you (I'm sorry for the poor debugging, I'm
reading & writing from my mobile): the first issue should have something
to do with synonym overlapping and since I'm very curious about what it
is happening, I will be more precise when I will be in front of a lap
Hi Luca,
I believe this is not an easy task to do passing through Solr/Lucene
internals; did you try to use what Solr offers out of the box?
For example, you could define several fields associated where each
corresponding field type uses a different synonym set. So you would have
* F1 -> FT1
ut a way to do so with SolrJ.
On Wed, 29 Aug 2018 at 14:21, Andrea Gazzarini wrote:
Well, I don't know the actual reason why the behavior is different
between Cloud and Embedded client: maybe things are different because in
the Embedded Solr HTTP is not involved at all, but I'm just
solution would be to send the Solr params as a json in
the request body, but I am not sure if SolrJ supports this.
Alfonso.
On Wed, 29 Aug 2018 at 13:46, Andrea Gazzarini wrote:
I think that's the issue: just guessing because I do not have the code
in front of me.
POST requests put the
ovider.java>
.
Best,
Alfonso.
On Wed, 29 Aug 2018 at 12:57, Andrea Gazzarini wrote:
Hi Alfonso,
could you please paste an extract of the client code? Specifically those
few lines where you create the SolrQuery with params.
The line you mentioned is dealing with ContentStream which as far
Hi Alfonso,
could you please paste an extract of the client code? Specifically those
few lines where you create the SolrQuery with params.
The line you mentioned is dealing with ContentStream which as far as I
remember wraps the request body, and not the request params. So as
request body Sol
A search component is something which contributes to the overall
returned response. With "contributes" I mean "adds something".
The highlighting is an example of such behavior: it depends on the query
component and on top of a set of search results it enriches the response
with an additional sec
Hi Roy, I think you are missing autoGeneratePhraseQueries="true" in the field
type definition.
I was in a slightly different use case when I met your same issue (I was
using synonym expansion at query time) and honestly I didn't understand
why this is not the default and implicit behavior. In other
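For reference, a field type declaration carrying that attribute might look like this (the type name and analyzer chain are just examples):

```xml
<fieldType name="text_syn" class="solr.TextField" autoGeneratePhraseQueries="true">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <!-- query-time synonym expansion, as in the scenario described above -->
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```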
to the new graph based factories, where we
stopped filtering on insert for those and switched to filtering on query
based on recommendations from the Solr Doc.
Thanks,
TZ
On 8/15/18, 3:17 PM, "Andrea Gazzarini" wrote:
Hi Thomas,
as you know, the two analyzers play in a different moment, wi
Hi Thomas,
as you know, the two analyzers come into play at different moments, with
different input and a different goal for the corresponding output:
* index analyzer: input is a field value, output is used for building
the index
* query analyzer: input is a (user) query string, output is used f
Hi,
field names with both leading and trailing underscores are reserved [1],
so it would be better to avoid them.
I cannot tell you what exactly the problem is, using such naming; I
remember I had troubles with function queries, so, in general, I would
follow that advice.
Best,
Andrea
[1]
Hi Rajnish,
yes, you can use a generic catch-all field (that's the price of having
such "centralization") where, by means of the *copyField* directive, all
fields are copied.
Then, you can use that field as the default search field (df parameter) in
your RequestHandler.
...
Best,
Andre
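A sketch of the copyField setup described above; the catch-all field name and type are just examples:

```xml
<!-- schema: a catch-all field receiving a copy of every other field -->
<field name="catch_all" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="*" dest="catch_all"/>

<!-- request handler defaults in solrconfig.xml: use it as the default search field -->
<str name="df">catch_all</str>
```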
Hi John,
Yes, it's possible.
Andrea
On Mon, 6 Aug 2018, 22:47 John Davis, wrote:
> Hi there,
> If a field is set as "ignored" (indexed=false, stored=false) can it be used
> for another field as part of copyfield directive which might index/store
> it.
>
> John
>
gs
Georg
On 31.07.2018 at 10:53, Andrea Gazzarini wrote:
Hi Georg,
I would say, without knowing your context, that this is not what Solr
is supposed to do. You're asking to load everything in a single
request/response and this poses a problem.
Since I guess that, even if we assume it works, you
Hi Georg,
I would say, without knowing your context, that this is not what Solr is
supposed to do. You're asking to load everything in a single
request/response and this poses a problem.
Since I guess that, even if we assume it works, you should then iterate
those results one by one or in blocks,
Hi Mario, could you please share your settings (e.g. OS, JVM memory,
System memory)?
Andrea
On 27/07/18 11:36, Bisonti Mario wrote:
Hello,
I get an error indexing an .xlsm or .xlsx file of 11 MB.
What could I do?
Thanks a lot
Mario
2018-07-27 11:08:25.634 WARN (qtp1521083627-99) [ x:cor
Hi Driss,
I think the answer to the first question is yes, but I guess it doesn't
help you much.
Second and third questions: "it depends". You should describe your
context better, narrowing the questions as much as possible ("how can we
do it" is definitely too generic).
Best,
Andrea
Il lun
Hi Ennio,
could you please share:
* your configuration (specifically the field type declaration in your
schema)
* the query (please add debug=true) and the corresponding query response
Best,
Andrea
On 17/07/18 17:35, Ennio Bozzetti wrote:
I'm trying to get my synonyms to work, but for thi
Hi,
Please expand on your needs a bit, because the answer to your question
could be different.
Specifically: is that nesting needed only for visualization purposes? Could
you please expand your access pattern (i.e. queries requirements)?
Even if Solr supports nested documents (just google "Solr nest
Thanks Andrea. I will write an update processor in the indexing pipeline.
>
> I feel this is a very good feature to support.
>
> Thanks,
> Anil
>
> On 12 July 2018 at 22:59, Andrea Gazzarini wrote:
>
> > Hi Anil,
> > The copy Field directive is not what you're looking
Hi Anil,
The copyField directive is not what you're looking for because it doesn't
change the stored value of a field.
What you need is an UpdateRequestProcessor, which is a kind of
interceptor in the indexing chain (i.e. it allows you to change an incoming
document before it gets indexed).
Unf
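A minimal sketch of such an update request processor chain in solrconfig.xml; the custom factory class name is hypothetical:

```xml
<updateRequestProcessorChain name="add-extra-field">
  <!-- hypothetical custom processor that enriches the incoming document -->
  <processor class="com.example.AddExtraFieldProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain is then selected with the update.chain request parameter (or set as a default on the update handler).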
The syntax is valid in all three examples; the right one depends on
what you need.
The first query executes a proximity search (you can think of it as a phrase
search, for simplicity), so it returns no results, probably because you don't
have any matching docs with that whole literal.
The second is
Hi,
I mean you should use Maven, which would pick up, starting from a version
number (e.g. 6.6.1), all the correct dependencies you need for developing the
plugin.
Yes, the "top" libraries (e.g. Solr and Lucene) should have the same
version, but on top of that, the plugin could require some other direct
Hi Zahra,
I think your guess is right: I see some mess in the library versions.
If I got you right:
* the target platform is Solr 6.6.1
* the compile classpath includes solr-core 4.1.0, 1.4.0 (!) and Lucene
7.4.0?
If that is correct, with a ClassCastException you're just scratching the
surface
ccurs when I set
required="true".
Can you please provide me some pointers to see what may be the reason?
Thanks,
Rushikesh Garadade
On Sat, Jun 9, 2018 at 2:56 PM Andrea Gazzarini
wrote:
Hi Rushikesh,
I bet your client is not doing what you think. The error is clear, the
incomin
Hi Rushikesh,
I bet your client is not doing what you think. The error is clear, the
incoming document doesn't have that field.
I would investigate more on the client side. Without entering into
interesting fields like unit testing, I guess the good old
System.out.println, just before sending th
Hi,
everything you need is in Solr; better: everything you need is Solr.
Solr is a server which exposes its services through HTTP, so you don't
need Apache at all (at least for a training course).
Best,
Andrea
On 06/06/18 08:57, azharuddin wrote:
I've got a question: I came across Apache Solr
<
Hi,
as far as I remember the Magento integration (at least the connector
version I worked with) doesn't have such capabilities, so if I remember
well and it is still valid, your need would require some custom (client)
code.
The alternative, which would move the search workflow entirely in Sol
Hi Sam, I have been in a similar scenario (not recently, so my answer could
be outdated). As far as I remember, caching, at least in that scenario,
didn't help much, probably because of the field size.
So we went with the second option: a custom SearchComponent connected with
Redis. I'm not aware if
Looking at the stack trace, which seem truncated, I would start from here
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Early EOF
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:190)
Could you please expand a bit on your context (e.g. Solr version, cloud /
Hi Sam,
I noticed the same behaviour. Looking at the code it seems that it is
expected: the two classes (ExtendedDisMaxQParser and DisMaxQParser)
don't have a direct inheritance relationships and the methods which deal
with the PF parameter are different. Specifically, the
DismaxQParser.getPhr
each
shard. This is still bad enough and you should use buildOnOptimize as
suggested but I just wanted to correct the wrong information I gave
earlier.
On Thu, Apr 20, 2017 at 6:23 PM, Andrea Gazzarini wrote:
Perfect, I don't need NRT at this moment so that fits perfectly
Thanks,
Andrea
0, 2017 at 5:29 PM, Andrea Gazzarini wrote:
Ok, many thanks
I see / read that it should be better to rely on the background merging
instead of issuing explicit optimizes, but I think in this case one optimize
in a day it shouldn't be a problem.
Did I get you correctly?
Thanks again,
Andre
Ok, many thanks.
I've seen / read that it's better to rely on background merging
instead of issuing explicit optimizes, but I think in this case one
optimize a day shouldn't be a problem.
Did I get you correctly?
Thanks again,
Andrea
On 20/04/17 13:17, Shalin Shekhar Mangar wro
how can I build the suggest index
(more or less) just after that window? I'm ok if the build happens after
a reasonable delay (e.g. 1, max 2 hours)
Many thanks,
Andrea
On 20/04/17 11:11, Shalin Shekhar Mangar wrote:
Comments inline:
On Wed, Apr 19, 2017 at 2:46 PM, Andrea Gazzarini
Hi,
any help out there?
BTW I forgot the Solr version: 6.5.0
Thanks,
Andrea
On 18/04/17 11:45, Andrea Gazzarini wrote:
Hi,
I have a project, with SolrCloud, where I'm going to use the Suggester
component (BlendedInfixLookupFactory with DocumentDictionaryFactory).
Some info:
* I will
Hi,
I have a project, with SolrCloud, where I'm going to use the Suggester
component (BlendedInfixLookupFactory with DocumentDictionaryFactory).
Some info:
* I will have a suggest-only collection, with no NRT requirements
(indexes will be updated with a daily frequency)
* I'm not yet sure
I can see those names in the "Schema browser" of the admin UI, so I guess
using the (Lucene?) API it shouldn't be hard to get this info.
I don't know if the Schema API (or some other service) offers this.
Andrea
On 14 Apr 2017 10:03, "Midas A" wrote:
> Hi,
>
>
> Can i get all the field c
Hi Wunder,
I think it's the first option: if you have 3 values then the analyzer
chain is executed three times.
Andrea
On 12/04/17 18:45, Walter Underwood wrote:
Does the KeywordTokenizer make each value into a unitary string or does it take
the whole list of values and make that a single st
Hi,
I think you got an old post. I would have a look at the built-in
feature, first. These posts can help you to get a quick overview:
https://cwiki.apache.org/confluence/display/solr/Suggester
http://alexbenedetti.blogspot.it/2015/07/solr-you-complete-me.html
https://lucidworks.com/2015/03/04/
Solr doesn't know about "null" values. Using a schema which declares all
those fields (id, eventA, eventB, eventC) and indexing those 2 documents
1,eventA=2,eventC=3
2,eventB=1,eventA=1
You already get a situation "similar" to what you want. I said "Similar"
because you won't have any null val
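In Solr's JSON update format, the two example documents above would look like this (the field types are assumed to be declared in the schema):

```json
[
  { "id": "1", "eventA": 2, "eventC": 3 },
  { "id": "2", "eventB": 1, "eventA": 1 }
]
```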
Hi Scott,
that could depend on a lot of things. Some questions:
* What is your commit policy? Explicit / auto / soft / hard ...
* "Other days things are off by a small amount between master and
slave"...what do you mean exactly? What is the behaviour you see in
terms of index versions bet
Hi Mugeesh,
my fault: a point is missing there, as suggested by
"-ea was not specified but"
You need to add the "-ea" VM argument. If you are in Eclipse,
Run >> Run Configurations,
then in the dialog that appears, select the run configuration
corresponding to that class (StartD
Hi Zaccheo,
I don't think this is possible, this is something related with the
classloader behavior, and even if there's a "priority" rule in the JVM,
I wouldn't rely on that in my application.
That could be good in a dev environment where you can specify the
"order" of the imported libraries (
On top of what Alessandro already told you, here's a brief post [1] that
can be useful for setting up your dev environment.
HTH
Andrea
[1]
http://andreagazzarini.blogspot.it/2016/11/quickly-debug-your-solr-add-on.html
On 30/01/17 11:16, alessandro.benedetti wrote:
Generally speaking I ass
Hi Deepak,
the latest version is 6.3.0, and I guess it is the best one to pick up.
Keep in mind that 3.6.1 => 6.3.0 is definitely a big jump.
In general, I think once a version is made available, that means it is
(hopefully) stable.
Best,
Andrea
On 16/12/16 08:10, Deepak Kumar Gupta wrote:
lie Hull" wrote:
> On 05/12/2016 09:18, Andrea Gazzarini wrote:
>
>> Hi guys,
>> I developed this handler [1] while doing some work on a Magento -> Solr
>> project.
>>
>> If someone is interested (this is a post [2] where I briefly explain the
>> goal),
act to fuzzier types of queries to get the counts.
Erik
> On Dec 5, 2016, at 9:08 AM, Charlie Hull wrote:
>
> On 05/12/2016 09:18, Andrea Gazzarini wrote:
>> Hi guys,
>> I developed this handler [1] while doing some work on a Magento -> Solr
>> project.
Hi guys,
I developed this handler [1] while doing some work on a Magento -> Solr
project.
If someone is interested (this is a post [2] where I briefly explain the
goal), or wants to contribute with some idea / improvement, feel free to
give me a shout or a feedback.
Best,
Andrea
[1] https://git
Hi,
I found a strange behavior with the MappingCharFilterFactory in Solr
*6.2.1*. Definitely curious if I'm missing something or someone else met
that.
I have an (index and query) chain composed as follows:
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-FoldToASCII.txt"/>
...
The mapping-FoldToASCII.txt is the exact fil
Hi Francesco,
On 29/09/16 10:47, marcyborg wrote:
Hi Andrea,
Thanks very much for your complete reply.
You're right, I'm new to Solr, so I'm sorry if I'm asking trivial
questions, or I'm not exhaustive in my questions!
About the scenario, I try to explain it:
I have to load the thesaurus in Solr
Hi,
here [1] you can find one way to do that. You can start such class as a
JUnit test or a simple main, using Maven or not.
Best,
Andrea
[1]
http://stackoverflow.com/questions/31521345/solr-5-integration-tests-with-maven#33189271
On 28/09/16 15:13, todhanda wrote:
I am using Solr 5.3, and
Hi Francesco,
I think some information is missing here: what are you trying to do
concretely? "Using Solr as a semantic search engine" means at the same
time everything and nothing :) and (I guess) it involves something more
than a thesaurus.
Keeping things simple, and assuming your only concern
Hi, I don't believe there's something for doing that in Solr, and
personally I'm not aware of anyone having developed such a filter.
Please have a look at this exchange [1], where Hoss gave some useful
hints about this topic.
Best,
Andrea
[1] http://osdir.com/ml/solr-user.lucene.apache.org/2010-12/ms
You're welcome ;) is that close to what you were looking for?
On 2 Jul 2016 11:53, "Mark Robinson" wrote:
> Thanks much Andrea esp. for the suggestion of SolrCoreAware!
>
>
>
> Best,
> Mark.
>
> On Thu, Jun 30, 2016 at 10:23 AM, Andrea Gazzarini
> wrote
Hi,
the lifecycle of your Solr extension (i.e. the component) is not
something that's up to you.
Before designing the component you should read the framework docs [1],
in order to understand the context where it will live, once deployed.
There's nothing, as far as I know, other than the compon
Sure, this is the API reference [1] where you can see, you can add types
and fields
Andrea
[1] https://cwiki.apache.org/confluence/display/solr/Schema+API
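For illustration, an add-field request against the Schema API might look like this (collection, field name, and type are examples):

```
POST /solr/mycollection/schema
{
  "add-field": {
    "name": "description_stemmed",
    "type": "text_en",
    "indexed": true,
    "stored": true
  }
}
```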
On 03/06/16 17:07, Jamal, Sarfaraz wrote:
Hi Guys,
I found the following article:
http://thinknook.com/keyword-stemming-and-lemmatisatio
Hi Carl,
This address is valid; every subscribed user received a copy of your email.
solr-user@lucene.apache.org
Andrea
On 21 May 2016 15:10, "Carl Roberts" wrote:
> And, these response are just weird. Do they mean this user list is
> obsolete? is solr no longer supported via a user list where
at 1:39 PM, Andrea Gazzarini wrote:
Hi Joel,
many thanks for the response and sorry for this late reply.
About the first question, I can open a JIRA for that. Instead, for
disabling the component I think it would be useful to add
- an automatic behaviour: if the sort criteria excludes the score the
be a new feature of the ReRanker. I think
> it's a good idea but it's not implemented yet.
>
> I'm not sure if anyone has any ideas about conditionally adding the
> ReRanker using configurations?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fr
Hi guys,
I have a Solr 4.10.4 instance with a RequestHandler that has a
re-ranking query configured like this:
dismax
...
{!boost b=someFunction() v=$q}
{!rerank reRankQuery=$rqq reRankDocs=60
reRankWeight=1.2}
score desc
Everythi
Although what you pasted isn't the complete schema, I guess you are missing a
wrote:
> Error :
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Could not load conf for core demo7: copyField dest :'i_member_id' is not an
> explicit field and doesn't match a dynamicField.. S
Hi Adel,
As far as I know, the mailing list doesn't allow attachments. Please paste
the relevant part of your log
Andrea
On 28 Mar 2016 11:18, "Adel Mohamed Khalifa"
wrote:
> Hello All,
>
>
>
> I failed to connect solr server through my website, I attached my solr log
> if anyone can help me ple
I connect
> from windows it’s the same code, I did not change in it.
>
>
>
> SOLR_SERVER_URL=http://172.16.0.72:8983/solr/SearchCore
>
>
>
> Regards,
> Adel Khalifa
>
>
>
>
>
> From: Andrea Gazzarini [mailto:gxs...@gmail.com]
> Sent: Sunday, March 27,
Hi Adel,
Absolutely not sure what's happening on the (Solr) server side; the first thing
that comes to my mind is: if you're correctly accessing the Solr admin
console, that means the string you're getting in that resource bundle is
wrong. I'd print out that value in order to make sure about the correct
o you.
Andrea
On 26 Mar 2016 15:46, "Anil" wrote:
> HI Alex,
>
> i am still no clear how an event is notified in my application if it
> listener is configured in SolrConfig.xml (centralized solr server). can you
> please clarify?
>
> Sorry for dumb question.
>
&g
;Anil" wrote:
> Thanks Alex. Let me digg more in that area.
>
> On 26 March 2016 at 19:40, Andrea Gazzarini wrote:
>
> > Event listeners are custom classes so you could do "anything"however
> I
> > guess the event firing is synchronous so the listener
Event listeners are custom classes, so you could do "anything"; however, I
guess the event firing is synchronous, so the listener logic should be
non-blocking and as fast as possible.
But this is my guess, I haven't looked at the code. Instead, if the listener
invocation is asynch then forget my commen
olr_5_3_1/solr/solrj/src/java/org/apache/solr/common/SolrInputDocument.java#L150
> >
> & setField
> <
> http://www.solr-start.com/javadoc/solr-lucene/org/apache/solr/common/SolrInputDocument.html#setField-java.lang.String-java.lang.Object-float-
> >
> ?
>
>
As far as I know, this is how Solr works (e.g. it replaces the whole
document): how do you replace only a part of a document?
Just send a SolrInputDocument with an existing (i.e. already indexed) id
and the document (on Solr) will be replaced.
Andrea
2015-12-19 8:16 GMT+01:00 Debraj Manna :
> C